
Inside PLANETech Week: How Israeli climate tech is targeting emerging markets

PLANETech Week unites startups and investors to tackle climate challenges and promote Israeli innovations, says Dan Bakola

In Short:
– PLANETech Week unites Israeli startups, investors, and policy leaders to tackle climate technology challenges in emerging markets.
– The Marketplace connects innovative Israeli startups with customers, especially in developing regions, to promote sustainability.

PLANETech Week brings together the world’s leading climate innovators to accelerate the scaling of climate technologies into emerging markets — the regions where emissions are rising fastest.

The event unites startups, investors, and policy leaders to solve the financial, regulatory, and infrastructure barriers slowing global climate deployment.

Speaking from Tel Aviv, Dan Bakola highlights how Israel’s climate ecosystem — home to more than 10,000 startups — is using technology to drive sustainability across agriculture, energy, materials, and the ocean economy.

A major part of the mission is Market Square, an online matchmaking platform connecting startups with investors, customers, multinationals, and partners across the developing world.

Climate solutions

With simple yet powerful technologies born out of Israel’s own challenges — from desert conditions to water scarcity — the country is aiming to share climate solutions with the world. PLANETech Week creates the environment for collaboration, connection, and global impact.

Israel’s transition from a developing country to a high-income nation offers valuable insights. The country’s experience in overcoming harsh environmental conditions has spurred innovative technologies applicable to global challenges.

For more information, visit PLANETech

Ahron Young traveled to Israel as a guest of the Foreign Ministry climate delegation.

Chunk aims to revolutionise plant-based dining experience


Chunk aims to revolutionise eating with plant-based steak alternatives, says founder Amos Golan

In Short:
– Chunk creates plant-based whole-cut products that mimic steak, focusing on high protein and low calories.
– Founder Amos Golan aims for nationwide availability in the US to encourage plant-based meal options.

Amos Golan, founder of Chunk, is transforming the way we eat with plant-based whole-cut products that replicate real steak. Offering high protein, low calories, and zero cholesterol, these steaks are produced without gums, stabilizers, or seed oils. The company has quickly gained traction with thousands of food-service partners across North America.

Founded five years ago in Brooklyn, Chunk now has around 50 employees in the US and Israel. Manufacturing takes place in Israel, with products distributed widely throughout the US. Unlike many plant-based brands that focus on ground beef alternatives, Chunk is creating whole cuts like steaks and roasts — providing a juicy, fibrous texture at competitive prices.

Using soy flour and proprietary solid-state fermentation technology, Chunk enhances both texture and flavor while reducing the environmental footprint compared to traditional beef. With plans for nationwide US rollout, partnerships with Whole Foods and AGB, and ambitions to become a regular option for plant-based eaters, Amos Golan and his team are aiming to redefine everyday meals.

For more information, visit Chunk




AI’s errors may be impossible to eliminate – what that means for its use in health care

Federal legislation introduced in early 2025 proposed allowing AI to prescribe medication.
Wladimir Bulgar/Science Photo Library via Getty Images

Carlos Gershenson, Binghamton University, State University of New York

In the past decade, AI’s success has led to uncurbed enthusiasm and bold claims – even though users frequently experience errors that AI makes. An AI-powered digital assistant can misunderstand someone’s speech in embarrassing ways, a chatbot could hallucinate facts, or, as I experienced, an AI-based navigation tool might even guide drivers through a corn field – all without registering the errors.

People tolerate these mistakes because the technology makes certain tasks more efficient. Increasingly, however, proponents are advocating the use of AI – sometimes with limited human supervision – in fields where mistakes have high costs, such as health care. For example, a bill introduced in the U.S. House of Representatives in early 2025 would allow AI systems to prescribe medications autonomously. Since then, health researchers and lawmakers have debated whether such prescribing would be feasible or advisable.

How exactly such prescribing would work if this or similar legislation passes remains to be seen. But it raises the stakes for how many errors AI developers can allow their tools to make and what the consequences would be if those tools led to negative outcomes – even patient deaths.

As a researcher studying complex systems, I investigate how different components of a system interact to produce unpredictable outcomes. Part of my work focuses on exploring the limits of science – and, more specifically, of AI.

Over the past 25 years, I have worked on projects including coordinating traffic lights, improving bureaucracies and detecting tax evasion. Even when these systems are highly effective, they are never perfect.

For AI in particular, errors might be an inescapable consequence of how the systems work. My lab’s research suggests that particular properties of the data used to train AI models play a role. This is unlikely to change, regardless of how much time, effort and funding researchers direct at improving AI models.

Nobody – and nothing, not even AI – is perfect

As Alan Turing, considered the father of computer science, once said: “If a machine is expected to be infallible, it cannot also be intelligent.” This is because learning is an essential part of intelligence, and people usually learn from mistakes. I see this tug-of-war between intelligence and infallibility at play in my research.

In a study published in July 2025, my colleagues and I showed that perfectly organizing certain datasets into clear categories may be impossible. In other words, there may be a minimum number of errors for a given dataset, simply because elements of many categories overlap. For some datasets – the core underpinning of many AI systems – AI will not perform better than chance.

Features of different dog breeds may overlap, making it hard for some AI models to differentiate them.
MirasWonderland/iStock via Getty Images Plus

For example, a model trained on a dataset of millions of dogs that logs only their age, weight and height will probably distinguish Chihuahuas from Great Danes with perfect accuracy. But it may make mistakes in telling apart an Alaskan malamute and a Doberman pinscher, since individuals of different breeds might fall within the same age, weight and height ranges.
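
To make the overlap argument concrete, here is a minimal sketch (my own illustration with made-up numbers, not the study's code or data): two synthetic "breeds" drawn from heavily overlapping weight and height distributions put a ceiling on how accurate any classifier can be.

```python
# Minimal sketch with hypothetical numbers, not the study's data: two synthetic
# "breeds" whose weight/height distributions overlap, so no classifier can
# reach 100% accuracy on them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical breeds: means differ a little, spreads overlap a lot (kg, cm).
breed_a = rng.normal(loc=[30.0, 60.0], scale=[5.0, 6.0], size=(n, 2))
breed_b = rng.normal(loc=[34.0, 64.0], scale=[5.0, 6.0], size=(n, 2))

X = np.vstack([breed_a, breed_b])
y = np.array([0] * n + [1] * n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")

# Because many individuals of both breeds share the same weight/height region,
# accuracy stalls near the overlap-induced floor (the Bayes error), no matter
# how much data or tuning is added.
```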

This categorizing is called classifiability, and my students and I started studying it in 2021. Using data from more than half a million students who attended the Universidad Nacional Autónoma de México between 2008 and 2020, we wanted to solve a seemingly simple problem. Could we use an AI algorithm to predict which students would finish their university degrees on time – that is, within three, four or five years of starting their studies, depending on the major?

We tested several popular algorithms that are used for classification in AI and also developed our own. No algorithm was perfect; the best ones – even one we developed specifically for this task – achieved an accuracy rate of about 80%, meaning that at least 1 in 5 students were misclassified. We realized that many students were identical in terms of grades, age, gender, socioeconomic status and other features – yet some would finish on time, and some would not. Under these circumstances, no algorithm would be able to make perfect predictions.

You might think that more data would improve predictability, but this usually comes with diminishing returns. This means that, for example, for each increase in accuracy of 1%, you might need 100 times the data. Thus, we would never have enough students to significantly improve our model’s performance.
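
To see how that kind of diminishing return plays out, here is a minimal sketch assuming a generic power-law learning curve; the parameters are illustrative, not the study's measurements.

```python
# Minimal sketch with an assumed power-law learning curve (illustrative
# parameters, not the study's measurements): error shrinks as data grows,
# but each additional slice of accuracy costs disproportionately more data.
def accuracy(n_samples: float, c: float = 0.6, b: float = 0.1) -> float:
    """Hypothetical learning curve: error = c * n_samples ** (-b)."""
    return 1.0 - c * n_samples ** (-b)

for n in (1e3, 1e5, 1e7, 1e9):
    print(f"{n:>14,.0f} samples -> accuracy ~{accuracy(n):.3f}")

# Each 100x increase in data buys a smaller gain than the one before; with a
# flatter curve (smaller b), a single extra percentage point of accuracy can
# easily demand the 100x more data described above.
```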

Additionally, many unpredictable turns in the lives of students and their families – unemployment, death, pregnancy – might occur after their first year at university, likely affecting whether they finish on time. So even with an infinite number of students, our predictions would still contain errors.

The limits of prediction

To put it more generally, what limits prediction is complexity. The word complexity comes from the Latin plexus, which means intertwined. The components that make up a complex system are intertwined, and it’s the interactions between them that determine what happens to them and how they behave.

Thus, studying elements of the system in isolation would probably yield misleading insights about them – as well as about the system as a whole.

Take, for example, a car traveling in a city. Knowing the speed at which it drives, it's theoretically possible to predict where it will end up at a particular time. But in real traffic, its speed will depend on interactions with other vehicles on the road. Since the details of these interactions emerge in the moment and cannot be known in advance, precisely predicting what happens to the car is possible only a few minutes into the future.
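
As a toy illustration of that point (my own sketch, not the author's model), compare the arrival time of a car on an empty road with that of the same car repeatedly slowed by traffic it happens to encounter:

```python
# Toy simulation (my illustration, not the author's model): on an empty road
# the arrival time is exactly predictable; with interacting traffic it varies
# from run to run, because the slowdowns emerge in the moment.
import random

def travel_minutes(road_km: float = 10.0, free_speed_kmh: float = 60.0,
                   interacting: bool = False, seed: int = 0) -> float:
    rng = random.Random(seed)
    km_done, minutes = 0.0, 0.0
    while km_done < road_km:
        speed = free_speed_kmh
        if interacting and rng.random() < 0.3:   # a slower vehicle appears ahead
            speed = rng.uniform(10.0, free_speed_kmh)
        km_done += speed / 60.0                  # distance covered this minute
        minutes += 1.0
    return minutes

print("empty road:", travel_minutes(), "min")
for s in range(3):
    print("with traffic:", travel_minutes(interacting=True, seed=s), "min")
# The further ahead you try to predict, the more of these unknowable
# interactions accumulate, so the forecast degrades with the horizon.
```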

AI is already playing an enormous role in health care.

Not with my health

These same principles apply to prescribing medications. Different conditions and diseases can have the same symptoms, and people with the same condition or disease may exhibit different symptoms. For example, fever can be caused by a respiratory illness or a digestive one. And a cold might cause a cough, but not always.

This means that health care datasets have significant overlaps that would prevent AI from being error-free.

Certainly, humans also make errors. But when AI misdiagnoses a patient, as it surely will, the situation falls into a legal limbo. It’s not clear who or what would be responsible if a patient were hurt. Pharmaceutical companies? Software developers? Insurance agencies? Pharmacies?

In many contexts, neither humans nor machines are the best option for a given task. “Centaurs,” or “hybrid intelligence” – that is, a combination of humans and machines – tend to be better than either humans or machines on their own. A doctor could certainly use AI to identify potential drugs for different patients, depending on their medical history, physiological details and genetic makeup. Researchers are already exploring this approach in precision medicine.

But common sense and the precautionary principle suggest that it is too early for AI to prescribe drugs without human oversight. And the fact that mistakes may be baked into the technology could mean that where human health is at stake, human supervision will always be necessary.

Carlos Gershenson, Professor of Innovation, Binghamton University, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.


New US national security strategy adds to Ukraine’s woes and exacerbates Europe’s dilemmas

Stefan Wolff, University of Birmingham and Tetyana Malyarenko, National University Odesa Law Academy

Ukraine is under unprecedented pressure, not only on the battlefield but also on the domestic and diplomatic fronts.

Each of these challenges on its own would be difficult for any government to handle. But together – and given there is no obvious solution to any of the problems the country is facing – they create a near-perfect storm.

It’s a storm that threatens to bring down Ukrainian president Volodymyr Zelensky’s government and deal a severe blow to Ukraine’s western allies.

On the frontlines in eastern Donbas, Ukraine has continued to lose territory since Russia’s summer offensive began in May 2025. The ground lost has been small in terms of area but significant in terms of the human and material cost.

Between them, Russia and Ukraine have suffered around 2 million casualties over the course of the war.

Perhaps more importantly, the people of Ukraine have endured months and months during which the best news has been that their troops were still holding out despite relentless Russian assaults. This relentless negativity has undermined morale among troops and civilians alike.

As a consequence, recruitment of new soldiers cannot keep pace with losses incurred on the frontlines – both in terms of casualties and desertions.

Moreover, potential conscripts to the Ukrainian army increasingly resort to violence to avoid being drafted into the military. A new recruitment drive, announced by the Ukrainian commander-in-chief, Oleksandr Syrsky, will increase the potential for further unrest.

Russia’s air campaign against Ukraine’s critical infrastructure continues unabated, further damaging what is left of the vital energy grid and leaving millions of families facing lengthy daily blackouts.

The country’s air defence systems are increasingly overwhelmed by nightly Russian attacks, which are penetrating hitherto safe areas such as the capital and key population centres in the south and west. It’s a grim outlook for Ukraine’s civilian population, who are now heading into the war’s fourth winter. A ceasefire, let alone a viable peace agreement, remains a very distant prospect.

The political turmoil that has engulfed Zelensky and his government adds to the sense of a potentially catastrophic downward spiral. There have been corruption scandals before, but none has come as close to the president himself.

The amounts allegedly involved in the latest bribery scandal – around US$100 million (£75 million) – are eye-watering at a time of national emergency. But it is also the callousness of Ukraine’s elites apparently enriching themselves that adds insult to injury.

The latest scandal has also opened a potential Pandora’s box of vicious recriminations. As more and more members of Zelensky’s inner circle are engulfed in corruption allegations, more details of how different parts of his administration benefited from various schemes or simply turned a blind eye are likely to emerge.

This has damaged Zelensky’s own standing with his citizens and allies. What has helped him survive are both his track record as a war leader so far and the lack of alternatives.

Without a clear pathway towards a smooth transition to a new leadership in Ukraine, the mutual dependency between Zelensky and his European allies has grown.

Whose side is the US on anyway?

The US under Donald Trump is no longer, and perhaps never has been, a dependable ally for Ukraine. What is worse, however, is that America has also ceased to be a dependable ally for Europe.

America’s new national security strategy, published last week, has exploded into this already precarious situation and has sent shockwaves across the whole of Europe. It casts the European Union as more of a threat to US interests than Russia.

It also threatens open interference in the domestic affairs of its erstwhile European allies. And crucially for Kyiv, it outlines a trajectory towards American disengagement from European security.

This adds to Ukraine’s problems – not only because Washington cannot be seen as an honest broker in negotiations with Moscow. It also decreases the value of any western security guarantees. In the absence of a US backstop, the primarily European coalition of the willing lacks the capacity, for now, to establish credible deterrence against future Russian adventurism.

The state of the conflict in Ukraine, December 7 2025.
Institute for the Study of War

Efforts by the coalition of the willing cannot hide the fact that a fractured European Union – whose key member states, such as France and Germany, have fragile governments challenged by openly pro-Trump and pro-Putin populists – is unlikely to step quickly into the assurance gap left by the US. The twin challenge of investing in their own defensive capabilities while keeping Ukraine in the fight against Russia, to buy the essential time needed to do so, creates a profound dilemma.

Can Europe and Ukraine go it alone?

Without the US, Ukraine’s allies simply do not have the resources to enable Ukraine to even improve its negotiation position, let alone to win this war. In a worst-case scenario, all they may be able to accomplish is delaying a Ukrainian defeat.

But this may still be better than a peace deal that would require enormous resources for Ukraine’s reconstruction, while giving Russia an opportunity to regroup, rebuild and rearm for Putin’s next steps towards an even greater Russian sphere of influence in Europe.

At this moment, neither Zelensky nor his European allies can therefore have any interest in a peace deal negotiated between Trump and Putin.

A resignation by Zelensky or his government is unlikely to improve the situation. On the contrary, it is likely to add to Ukraine’s problems. Any new government would be subject to the most intense pressure to accept an imposed deal that Trump and Putin may be conspiring to strike.

Eventually, this war will end, and it will almost certainly require painful concessions from Ukraine. For Europe, the time until then needs to be used to develop a credible plan for stabilising Ukraine, deterring Russia and learning to live and survive without the transatlantic alliance.

The challenge for Europe is to do all three things simultaneously. The danger for Zelensky is that – for Europe – deterring Russia and appeasing the US become existential priorities in themselves and that he and Ukraine could end up as bargaining chips in a bigger game.

Stefan Wolff, Professor of International Security, University of Birmingham and Tetyana Malyarenko, Professor of International Security, Jean Monnet Professor of European Security, National University Odesa Law Academy

This article is republished from The Conversation under a Creative Commons license. Read the original article.
