

Chunk aims to revolutionise plant-based dining experience


Chunk aims to revolutionise eating with plant-based steak alternatives, says founder Amos Golan

In Short:
– Chunk creates plant-based whole-cut products that mimic steak, focusing on high protein and low calories.
– Founder Amos Golan aims for nationwide availability in the US to encourage plant-based meal options.

Amos Golan, founder of Chunk, is transforming the way we eat with plant-based whole-cut products that replicate real steak. Offering high protein, low calories, and zero cholesterol, these steaks are produced without gums, stabilizers, or seed oils. The company has quickly gained traction with thousands of food-service partners across North America.

Founded five years ago in Brooklyn, Chunk now has around 50 employees in the US and Israel. Manufacturing takes place in Israel, with products distributed widely throughout the US. Unlike many plant-based brands that focus on ground beef alternatives, Chunk is creating whole cuts like steaks and roasts — providing a juicy, fibrous texture at competitive prices.

Using soy flour and proprietary solid-state fermentation technology, Chunk enhances both texture and flavor while reducing the environmental footprint compared to traditional beef. With plans for nationwide US rollout, partnerships with Whole Foods and AGB, and ambitions to become a regular option for plant-based eaters, Amos Golan and his team are aiming to redefine everyday meals.

For more information, visit Chunk




Israel’s thriving startup ecosystem fuels innovation and resilience


Israel’s dynamic startup scene thrives on necessity and resilience, says Rafael Singer, with innovation rising out of the challenges of conflict

In Short:
– Israel excels in innovation and startups, driven by necessity and resilience from historical challenges.
– Investment opportunities are growing, with a focus on technologies promoting peace and regional collaboration.

Israel calls itself an “innovation island,” and according to Rafael Singer — Director of Climate & Sustainability at the Ministry of Foreign Affairs — that title is well earned.

In this in-depth conversation, he explains how a lack of natural resources forced Israel to innovate early, building agriculture, water tech, and climate solutions from the ground up.

He discusses how Israel’s culture of embracing failure is central to its entrepreneurial strength, and why government investment remains critical to sustaining a nation with the world’s highest startup rate per capita. The defence sector’s R&D continues to spill into civilian life, powering everything from food security to climate resilience.

Singer also explores what other nations can learn from Israel’s approach to building a future-ready economy — one rooted in resilience, creativity, and rapid adaptation.

Israel wants the world to know its tech ecosystem remains open, active, and hungry for global partnerships. Collaboration with regional neighbours on issues like water security, climate challenges, and sustainability is seen as a pathway to long-term peace, reinforced by initiatives like the Abraham Accords.

For more information, visit the Ministry of Foreign Affairs

Ahron Young traveled to Israel as a guest of the Foreign Ministry climate delegation.



PLANETech Week connects startups and investors for sustainability

PLANETech Week unites startups and investors to tackle climate challenges and promote Israeli innovations, says Dan Bakola


Inside PLANETech Week: How Israeli climate tech is targeting emerging markets

In Short:
– PLANETech Week unites Israeli startups, investors, and policy leaders to tackle climate technology challenges in emerging markets.
– The Marketplace connects innovative Israeli startups with customers, especially in developing regions, to promote sustainability.

PLANETech Week brings together the world’s leading climate innovators to accelerate the scaling of climate technologies into emerging markets — the regions where emissions are rising fastest.

The event unites startups, investors, and policy leaders to solve the financial, regulatory, and infrastructure barriers slowing global climate deployment.

Speaking from Tel Aviv, Dan Bakola highlights how Israel’s climate ecosystem — home to more than 10,000 startups — is using technology to drive sustainability across agriculture, energy, materials, and the ocean economy.

A major part of the mission is Market Square, an online matchmaking platform connecting startups with investors, customers, multinationals, and partners across the developing world.

Climate solutions

With simple yet powerful technologies born out of Israel’s own challenges — from desert conditions to water scarcity — the country is aiming to share climate solutions with the world. PLANETech Week creates the environment for collaboration, connection, and global impact.

Israel’s transition from a developing country to a high-income nation offers valuable insights. The country’s experience in overcoming harsh environmental conditions has spurred innovative technologies applicable to global challenges.

For more information, visit PLANETech

Ahron Young traveled to Israel as a guest of the Foreign Ministry climate delegation.



AI’s errors may be impossible to eliminate – what that means for its use in health care



Federal legislation introduced in early 2025 proposed allowing AI to prescribe medication.
Wladimir Bulgar/Science Photo Library via Getty Images

Carlos Gershenson, Binghamton University, State University of New York

In the past decade, AI’s success has led to uncurbed enthusiasm and bold claims – even though users frequently experience errors that AI makes. An AI-powered digital assistant can misunderstand someone’s speech in embarrassing ways, a chatbot could hallucinate facts, or, as I experienced, an AI-based navigation tool might even guide drivers through a corn field – all without registering the errors.

People tolerate these mistakes because the technology makes certain tasks more efficient. Increasingly, however, proponents are advocating the use of AI – sometimes with limited human supervision – in fields where mistakes have high cost, such as health care. For example, a bill introduced in the U.S. House of Representatives in early 2025 would allow AI systems to prescribe medications autonomously. Health researchers as well as lawmakers since then have debated whether such prescribing would be feasible or advisable.

How exactly such prescribing would work if this or similar legislation passes remains to be seen. But it raises the stakes for how many errors AI developers can allow their tools to make and what the consequences would be if those tools led to negative outcomes – even patient deaths.

As a researcher studying complex systems, I investigate how different components of a system interact to produce unpredictable outcomes. Part of my work focuses on exploring the limits of science – and, more specifically, of AI.

Over the past 25 years, I have worked on projects including traffic light coordination, bureaucracy improvement and tax evasion detection. Even when these systems are highly effective, they are never perfect.

For AI in particular, errors might be an inescapable consequence of how the systems work. My lab’s research suggests that particular properties of the data used to train AI models play a role. This is unlikely to change, regardless of how much time, effort and funding researchers direct at improving AI models.

Nobody – and nothing, not even AI – is perfect

As Alan Turing, considered the father of computer science, once said: “If a machine is expected to be infallible, it cannot also be intelligent.” This is because learning is an essential part of intelligence, and people usually learn from mistakes. I see this tug-of-war between intelligence and infallibility at play in my research.

In a study published in July 2025, my colleagues and I showed that perfectly organizing certain datasets into clear categories may be impossible. In other words, there may be a minimum number of errors that a given dataset produces, simply because elements of different categories overlap. For some datasets – the core underpinning of many AI systems – AI will not perform better than chance.

Features of different dog breeds may overlap, making it hard for some AI models to differentiate them. MirasWonderland/iStock via Getty Images Plus

For example, a model trained on a dataset of millions of dogs that logs only their age, weight and height will probably distinguish Chihuahuas from Great Danes with perfect accuracy. But it may make mistakes in telling apart an Alaskan malamute and a Doberman pinscher, since individuals of different breeds might fall within the same age, weight and height ranges.
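To make that concrete, here is a minimal sketch in Python (with invented weight distributions, not data from the study) showing that when two breeds' measurements overlap, even the best possible decision rule has an error floor:

```python
# A minimal sketch of the overlap problem, with invented numbers (this is
# not the paper's code or data). Two simulated breeds have overlapping
# weight distributions, so even the optimal rule misclassifies some dogs.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 100_000

# Assumed weight distributions in kg; the ranges deliberately overlap.
malamute = rng.normal(loc=38.0, scale=4.0, size=n)
doberman = rng.normal(loc=40.0, scale=5.0, size=n)

def classify(weights):
    # With equal priors, picking the breed with the higher probability
    # density at each weight is the best possible single-feature rule.
    p_mal = norm.pdf(weights, 38.0, 4.0)
    p_dob = norm.pdf(weights, 40.0, 5.0)
    return np.where(p_mal >= p_dob, "malamute", "doberman")

error = (np.mean(classify(malamute) != "malamute")
         + np.mean(classify(doberman) != "doberman")) / 2
print(f"Error of the best possible rule: {error:.1%}")
# More training data cannot push this below the overlap-driven floor.
```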

How cleanly a dataset can be divided into categories is called its classifiability, and my students and I started studying it in 2021. Using data from more than half a million students who attended the Universidad Nacional Autónoma de México between 2008 and 2020, we wanted to solve a seemingly simple problem. Could we use an AI algorithm to predict which students would finish their university degrees on time – that is, within three, four or five years of starting their studies, depending on the major?

We tested several popular algorithms that are used for classification in AI and also developed our own. No algorithm was perfect; the best ones − even one we developed specifically for this task − achieved an accuracy rate of about 80%, meaning that at least 1 in 5 students were misclassified. We realized that many students were identical in terms of grades, age, gender, socioeconomic status and other features – yet some would finish on time, and some would not. Under these circumstances, no algorithm would be able to make perfect predictions.
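A toy illustration of that ceiling, using invented records rather than the real university dataset: when identical feature profiles lead to different outcomes, even an oracle that predicts each profile's majority outcome cannot reach 100% accuracy.

```python
# Invented toy records, not the study's data: conflicting duplicates
# (same features, different outcome) cap the accuracy of ANY classifier.
from collections import Counter

# (grade_band, age, socioeconomic_status) -> 1 if finished on time, else 0
records = [
    (("high", 19, "mid"), 1),
    (("high", 19, "mid"), 1),
    (("high", 19, "mid"), 0),   # same profile as above, different outcome
    (("low", 21, "low"), 0),
    (("low", 21, "low"), 1),    # another conflicting pair
    (("mid", 20, "mid"), 1),
]

outcomes_by_profile = {}
for profile, label in records:
    outcomes_by_profile.setdefault(profile, []).append(label)

# An oracle that always predicts each profile's majority outcome sets the
# upper bound on accuracy; conflicting duplicates keep it below 100%.
best_hits = sum(max(Counter(labels).values())
                for labels in outcomes_by_profile.values())
print(f"Accuracy ceiling: {best_hits / len(records):.0%}")  # 67% here
```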

You might think that more data would improve predictability, but accuracy usually improves with diminishing returns. For example, each additional 1% of accuracy might require 100 times as much data. Thus, we would never have enough students to significantly improve our model’s performance.
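Here is the back-of-the-envelope arithmetic, assuming the illustrative 100-times multiplier above:

```python
# Illustrative scaling arithmetic (assumed numbers, not measured ones):
# each extra 1% of accuracy costs roughly 100x more data.
students_available = 500_000       # roughly the dataset size described
cost_per_percent = 100             # assumed multiplier per 1% gained

for gain in range(1, 6):
    needed = students_available * cost_per_percent ** gain
    print(f"+{gain}% accuracy -> ~{needed:.1e} students")
# Going from 80% to 85% would take ~5e15 students, orders of magnitude
# more people than have ever lived, so the data simply does not exist.
```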

Additionally, many unpredictable turns in the lives of students and their families – unemployment, death, pregnancy – might occur after their first year at university, likely affecting whether they finish on time. So even with an infinite number of students, our predictions would still contain errors.

The limits of prediction

To put it more generally, what limits prediction is complexity. The word complexity comes from the Latin plexus, which means intertwined. The components that make up a complex system are intertwined, and it’s the interactions between them that determine what happens to them and how they behave.

Thus, studying elements of the system in isolation would probably yield misleading insights about them – as well as about the system as a whole.

Take, for example, a car traveling in a city. Knowing the speed at which it drives, it’s theoretically possible to predict where it will end up at a particular time. But in real traffic, its speed will depend on interactions with other vehicles on the road. Since the details of these interactions emerge in the moment and cannot be known in advance, precisely predicting what happens to the car is possible only a few minutes into the future.
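A small, invented simulation makes the point: if a car's speed each minute depends on interactions no forecaster can know in advance, equally plausible futures quickly drift apart.

```python
# A toy simulation (all parameters invented): the car's speed depends on
# unforeseeable interactions with traffic, so equally plausible futures
# diverge within minutes.
import random

def position_after(minutes, seed):
    random.seed(seed)
    speed_kmh, position_km = 50.0, 0.0
    for _ in range(minutes):
        if random.random() < 0.3:                 # unforeseen congestion
            speed_kmh = max(10.0, speed_kmh - 15.0)
        else:
            speed_kmh = min(60.0, speed_kmh + 5.0)
        position_km += speed_kmh / 60.0           # distance this minute
    return position_km

# Five runs differing only in which interactions happen to occur:
print([round(position_after(10, seed), 1) for seed in range(5)])
# The forecasts spread over kilometres after just 10 minutes.
```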


Not with my health

These same principles apply to prescribing medications. Different conditions and diseases can have the same symptoms, and people with the same condition or disease may exhibit different symptoms. For example, fever can be caused by a respiratory illness or a digestive one. And a cold might cause a cough, but not always.

This means that health care datasets have significant overlaps that would prevent AI from being error-free.

Certainly, humans also make errors. But when AI misdiagnoses a patient, as it surely will, the situation falls into a legal limbo. It’s not clear who or what would be responsible if a patient were hurt. Pharmaceutical companies? Software developers? Insurance agencies? Pharmacies?

In many contexts, neither humans nor machines are the best option for a given task. “Centaurs,” or “hybrid intelligence” – that is, combinations of humans and machines – tend to be better than either on their own. A doctor could certainly use AI to identify potential drugs for different patients, depending on their medical history, physiological details and genetic makeup. Researchers are already exploring this approach in precision medicine.

But common sense and the precautionary principle suggest that it is too early for AI to prescribe drugs without human oversight. And the fact that mistakes may be baked into the technology could mean that where human health is at stake, human supervision will always be necessary.

Carlos Gershenson, Professor of Innovation, Binghamton University, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.

