
Ticker Views

These jobs will thrive – but others may vanish – as AI transforms Australia’s workforce

Janine Dixon, Victoria University and James Lennox, Victoria University

The Commonwealth Bank of Australia made headlines when it announced last week it would cut 45 call centre jobs, thanks to the introduction of an AI chatbot.

This only added fuel to ongoing speculation – and some alarmism – about how artificial intelligence (AI) is going to transform the world of work in Australia.

But this revolution isn’t a simple story of “robots” coming and taking everyone’s jobs. In some industries, AI tools are already helping people do parts of their jobs better and faster.

Junior lawyers are using AI tools to help with some of the more mundane tasks they are often assigned. Recruiters are already widely using AI tools to screen CVs and help with hiring decisions – despite concerns about possible inadvertent bias.

So where is this all going?

Using a model of the Australian economy, and building on existing research by the International Labour Organization, we simulated two future versions of Australia through to 2050: one in which businesses and government adopt AI extensively, and one with no AI – that is, a future that looks rather like today.

Comparing these two futures helps us understand what we might gain and lose from this new technology.

A very different future

AI is a very disruptive technology, meaning a future with it looks pretty different to a future without it.

To help forecast where we might be headed, the International Labour Organization has produced a detailed set of “exposure indices” for more than 400 different occupations. These indicate the extent to which human input to each occupation will be displaced or augmented by AI.

The most exposed occupation is data entry clerk, for which the International Labour Organization estimates 70% of the tasks currently done by humans could be done or improved by AI. Bricklayers and dental assistants, at the other end of the scale, are among jobs classified as “not exposed”.

What this means for Australia

To perform our simulation, we mapped these occupation categories onto the Australian context. The International Labour Organization indices indicate 32% of jobs in Australia could be done by AI. But this doesn’t mean that 32% of people will lose their jobs overnight.
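
As a rough illustration of how such a mapping can be aggregated, the sketch below computes an employment-weighted average of occupation-level exposure indices. The occupations, employment counts and index values are invented for illustration only; they are not the study’s data, and the real modelling spans more than 400 ILO occupation categories.

```python
# Illustrative sketch only: hypothetical occupations, employment counts and
# ILO-style exposure indices (the share of tasks AI could do or improve).
occupations = {
    # occupation: (employed_persons, exposure_index)
    "data entry clerk": (50_000, 0.70),   # most exposed in the ILO index
    "recruiter":        (40_000, 0.45),
    "teacher":          (300_000, 0.15),
    "bricklayer":       (80_000, 0.00),   # classified as "not exposed"
}

total_employment = sum(workers for workers, _ in occupations.values())

# Employment-weighted average exposure: the share of current work
# (not of workers) that AI could in principle perform or augment.
weighted_exposure = sum(
    workers * index for workers, index in occupations.values()
) / total_employment

print(f"Economy-wide exposure to AI: {weighted_exposure:.1%}")
```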

It will take time for AI capabilities to be installed, giving people time to train for alternative careers. Much of the impact is likely to be years away, meaning that school-leavers can make different choices and prepare for an AI world.

Many studies, including the Productivity Commission’s interim report on AI, find AI will drive faster economic growth. In a faster-growing economy, more people will work as teachers, hairdressers, and carers, because AI isn’t expected to be as useful in those roles.

This faster-growing economy will also require more school buildings, hair salons and care homes.

As a result, some of the occupations with the largest expansions will be in the construction and building services sectors. Cleaners, construction labourers, carpenters and bricklayers will all have big roles to play in an AI future.

Managing the transition

Our simulation shows that during the transition period in which employers gradually adopt AI, the unemployment rate will be higher than normal, as workers seek new jobs and investors seek new opportunities. But there is scope for governments to act to minimise the disruption.

First, they can prepare people for careers in occupations that will grow strongly, such as the construction and care roles highlighted above.

Second, government can facilitate early, jobs-focused investment in industries less exposed to AI, particularly those that require lots of interpersonal input.

For example, investment in a world with fewer business analysts and more hospitality workers should be targeted at hotels and hospitality venues, rather than office space.

And third, AI will drive economic growth and tax revenues. This creates an opportunity for the government – a major employer – to create and fill more jobs in support of a safe and healthy society, such as drug and alcohol services, child protection case workers, and teachers’ aides.

Bringing everyone along

Although we find that the economy will grow faster in an AI world, there’s no guarantee this growth will include everyone.

Overall, our simulation paints a picture of a larger and better-resourced economy. Total employment won’t change much, but employment in some occupations will be much larger or smaller than it would be in a non-AI future.

But our simulation also suggests growth in profits will be stronger than growth in wages. Governments will need to keep a close eye on wage growth and equality, and may need to address emerging issues through tax policy, competition policy and industrial relations.

Janine Dixon, Director, Centre of Policy Studies, Victoria University and James Lennox, Senior Research Fellow, Centre of Policy Studies (CoPS), Victoria University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Ticker Views

From the Goldbergs to the Icebergs – Bondi is Australia, Australia is Bondi Beach

When I think of Australia, I often think of Bondi Beach. Not just because of its great natural beauty and its hip, cosmopolitan but casual feel, but also because of its importance in my own family history.

My grandfather, Ken Harcourt, a Jew from Lismore, born of Romanian and Polish refugee parents, grew up in Bondi, and he and his brother Sam spent most mornings in the surf and most afternoons at the track at Randwick. Ken, originally named Kopel Harkowitz, was the son of immigrants from Transylvania (which is sometimes considered Romania, sometimes Hungary, but if I am talking to Frank Lowy it’s definitely Hungary) and Poland.

Kopel’s mother Dinah Harkowitz always wanted her eldest son to be a Rabbi, but young Kopel wanted to be a true blue Aussie lifesaver at Bondi. He and his brother had trouble getting into the club as Kopel Harkowitz, but when he fronted as Ken Harcourt, they said ‘no worries’. When I asked my grandfather why he changed his name, he used to say, ‘Well, I didn’t really, I just went from the Goldbergs to the Icebergs’. Sam became a Harcourt too, and they became professional punters and even had a radio show named after them called ‘The Racing Harcourts’.

So that’s why I am a Harcourt, and why Bondi Beach means so much to me. In fact, I deliberately chose Bondi Beach as the cover of my first published book, Beyond Our Shores. I thought it symbolic that a son of Eastern European migrants from way beyond our shores aspired to be a true blue Aussie lifesaver at Bondi. A major theme of the book is how important immigration has been to Australia’s export performance and our national economic prosperity. Waves of English, Irish, Scottish, Greek, Jewish, Russian, Chinese, Lebanese, New Zealand and Indonesian migrants have all done their bit to grow Australia’s links with the world. Many of them have become lifesavers too! This is so special to Sydney, and nowhere is it more apparent than at Bondi, with its great mix of languages and cultures.

Weakness of leadership

But what would Kopel Harkowitz make of Bondi Beach on December 14, 2025? Like most decent Australians, he would have been shocked at the explosion of anti-Semitism and the weakness of modern Australian political leadership. My grandfather was a proud Australian as well as a proud Jew, and thought Australia was the safest and most democratic country in the world. He loved Christmas, Anzac Day and all the celebrations, and thought religion, like voting, was a private matter not to be imposed on others.

I am sure he would have been shocked at the chants at the Sydney Opera House just after 7 October 2023, and at the weak federal government response. He would have been shocked at people marching across the Sydney Harbour Bridge chanting ‘globalise the intifada’, and at the intimidation of the Jewish community, with people travelling to Bondi every weekend to wave flags and chant slogans. He would have feared for the safety of Jewish staff and students on Australian university campuses, and I suspect he would have been amazed at what was broadcast on his beloved ABC.

And he would have been right. The intimidation, from the Sydney Opera House chants through to the shootings at Bondi, must stop. It’s not a matter of ‘balancing Islamophobia with anti-Semitism’, as the federal government seems to think; it’s all of the Australian community against fundamentalist Islamist terrorists. The attack on the Jewish Hanukkah celebration at Bondi was an attack on all of us. Bondi is Australia, Australia is Bondi Beach. It’s symbolic that the hero of the day was an Aussie fruit and veg shop owner (himself of Lebanese Muslim origin) who tackled the terrorist gunman and in doing so saved many lives. Responsible Muslim nations like Morocco and the UAE take a hard line against terrorists; so should the West, starting with Australia.

My grandfather knew that, and that’s why he loved Australia. May his memory be a blessing.


Ticker Views

AI’s errors may be impossible to eliminate – what that means for its use in health care


Image: Federal legislation introduced in early 2025 proposed allowing AI to prescribe medication. (Wladimir Bulgar/Science Photo Library via Getty Images)

Carlos Gershenson, Binghamton University, State University of New York

In the past decade, AI’s success has led to uncurbed enthusiasm and bold claims – even though users frequently encounter the errors AI makes. An AI-powered digital assistant can misunderstand someone’s speech in embarrassing ways, a chatbot can hallucinate facts, or, as I experienced, an AI-based navigation tool can even guide drivers through a corn field – all without registering the errors.

People tolerate these mistakes because the technology makes certain tasks more efficient. Increasingly, however, proponents are advocating the use of AI – sometimes with limited human supervision – in fields where mistakes carry a high cost, such as health care. For example, a bill introduced in the U.S. House of Representatives in early 2025 would allow AI systems to prescribe medications autonomously. Health researchers and lawmakers have since debated whether such prescribing would be feasible or advisable.

How exactly such prescribing would work if this or similar legislation passes remains to be seen. But it raises the stakes for how many errors AI developers can allow their tools to make and what the consequences would be if those tools led to negative outcomes – even patient deaths.

As a researcher studying complex systems, I investigate how different components of a system interact to produce unpredictable outcomes. Part of my work focuses on exploring the limits of science – and, more specifically, of AI.

Over the past 25 years I have worked on projects including traffic light coordination, improving bureaucracies and detecting tax evasion. Even though such systems can be highly effective, they are never perfect.

For AI in particular, errors might be an inescapable consequence of how the systems work. My lab’s research suggests that particular properties of the data used to train AI models play a role. This is unlikely to change, regardless of how much time, effort and funding researchers direct at improving AI models.

Nobody – and nothing, not even AI – is perfect

As Alan Turing, considered the father of computer science, once said: “If a machine is expected to be infallible, it cannot also be intelligent.” This is because learning is an essential part of intelligence, and people usually learn from mistakes. I see this tug-of-war between intelligence and infallibility at play in my research.

In a study published in July 2025, my colleagues and I showed that perfectly organizing certain datasets into clear categories may be impossible. In other words, there may be a minimum number of errors that a given dataset produces, simply because elements of many categories overlap. For some datasets – the core underpinning of many AI systems – AI will not perform better than chance.

Image: Features of different dog breeds may overlap, making it hard for some AI models to differentiate them. (MirasWonderland/iStock via Getty Images Plus)

For example, a model trained on a dataset of millions of dogs that logs only their age, weight and height will probably distinguish Chihuahuas from Great Danes with perfect accuracy. But it may make mistakes in telling apart an Alaskan malamute and a Doberman pinscher, since different individuals of different breeds might fall within the same age, weight and height ranges.
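
A minimal sketch of this overlap problem, using synthetic data rather than any dataset from the study: two hypothetical breeds whose weights and heights are drawn from overlapping distributions, so even a reasonable classifier tops out well below perfect accuracy. The breed statistics and the choice of logistic regression are assumptions made purely for illustration.

```python
# Illustrative sketch with synthetic data (not the study's datasets):
# two classes whose features overlap, so no classifier can be perfect.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical breed statistics: weight (kg) and height (cm) distributions
# that overlap substantially.
malamute = rng.normal(loc=[38.0, 61.0], scale=[4.0, 3.0], size=(n, 2))
doberman = rng.normal(loc=[40.0, 66.0], scale=[5.0, 4.0], size=(n, 2))

X = np.vstack([malamute, doberman])
y = np.array([0] * n + [1] * n)  # 0 = malamute, 1 = doberman

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Accuracy plateaus below 1.0 because identical-looking dogs can belong
# to different breeds - an irreducible error, not a fixable bug.
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```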

How well a dataset can be separated into distinct categories is called its classifiability, and my students and I started studying it in 2021. Using data from more than half a million students who attended the Universidad Nacional Autónoma de México between 2008 and 2020, we wanted to solve a seemingly simple problem. Could we use an AI algorithm to predict which students would finish their university degrees on time – that is, within three, four or five years of starting their studies, depending on the major?

We tested several popular algorithms that are used for classification in AI and also developed our own. No algorithm was perfect; the best ones – even one we developed specifically for this task – achieved an accuracy rate of about 80%, meaning that at least 1 in 5 students was misclassified. We realized that many students were identical in terms of grades, age, gender, socioeconomic status and other features – yet some would finish on time, and some would not. Under these circumstances, no algorithm would be able to make perfect predictions.

You might think that more data would improve predictability, but this usually comes with diminishing returns. This means that, for example, for each increase in accuracy of 1%, you might need 100 times the data. Thus, we would never have enough students to significantly improve our model’s performance.
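
To make that intuition concrete, the sketch below assumes test error falls off as a slow power law in the size of the training set – a common empirical pattern, not a result from the study – and shows how little accuracy each extra order of magnitude of data buys. The constants are invented purely for illustration.

```python
# Illustrative sketch: assume test error follows a slow power law in the
# training-set size, error(n) = c * n ** (-beta).
# c and beta are made-up values, chosen only to mimic the pattern described
# in the text (~80% accuracy at about half a million records, with huge
# amounts of extra data needed for each additional percentage point).
c, beta = 0.23, 0.011

def error(n: int) -> float:
    return c * n ** (-beta)

base_n = 500_000  # roughly the half a million student records in the study
for multiplier in (1, 10, 100, 10_000):
    n = base_n * multiplier
    print(f"{n:>13,} samples -> error {error(n):.3f} "
          f"(accuracy {1 - error(n):.1%})")
```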

Additionally, many unpredictable turns in the lives of students and their families – unemployment, death, pregnancy – might occur after their first year at university, likely affecting whether they finish on time. So even with an infinite number of students, our predictions would still contain errors.

The limits of prediction

To put it more generally, what limits prediction is complexity. The word complexity comes from the Latin plexus, which means intertwined. The components that make up a complex system are intertwined, and it’s the interactions between them that determine what happens to them and how they behave.

Thus, studying elements of the system in isolation would probably yield misleading insights about them – as well as about the system as a whole.

Take, for example, a car traveling in a city. Knowing the speed at which it drives, it’s theoretically possible to predict where it will end up at a particular time. But in real traffic, its speed will depend on interactions with other vehicles on the road. Since the details of these interactions emerge in the moment and cannot be known in advance, precisely predicting what happens to the car is possible only a few minutes into the future.

AI is already playing an enormous role in health care.

Not with my health

These same principles apply to prescribing medications. Different conditions and diseases can have the same symptoms, and people with the same condition or disease may exhibit different symptoms. For example, fever can be caused by a respiratory illness or a digestive one. And a cold might cause a cough, but not always.

This means that health care datasets have significant overlaps that would prevent AI from being error-free.

Certainly, humans also make errors. But when AI misdiagnoses a patient, as it surely will, the situation falls into a legal limbo. It’s not clear who or what would be responsible if a patient were hurt. Pharmaceutical companies? Software developers? Insurance agencies? Pharmacies?

In many contexts, neither humans nor machines are the best option for a given task. “Centaurs”, or “hybrid intelligence” – that is, combinations of humans and machines – tend to be better than either on their own. A doctor could certainly use AI to identify potential drugs for different patients, depending on their medical history, physiological details and genetic makeup. Researchers are already exploring this approach in precision medicine.

But common sense and the precautionary principle suggest that it is too early for AI to prescribe drugs without human oversight. And the fact that mistakes may be baked into the technology could mean that where human health is at stake, human supervision will always be necessary.

Carlos Gershenson, Professor of Innovation, Binghamton University, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Ticker Views

New US national security strategy adds to Ukraine’s woes and exacerbates Europe’s dilemmas

Stefan Wolff, University of Birmingham and Tetyana Malyarenko, National University Odesa Law Academy

Ukraine is under unprecedented pressure, not only on the battlefield but also on the domestic and diplomatic fronts.

Each of these challenges on its own would be difficult for any government to handle. But together – and given there is no obvious solution to any of the problems the country is facing – they create a near-perfect storm.

It’s a storm that threatens to bring down Ukrainian president Volodymyr Zelensky’s government and deal a severe blow to Ukraine’s western allies.

On the frontlines in eastern Donbas, Ukraine has continued to lose territory since Russia’s summer offensive began in May 2025. The ground lost has been small in terms of area but significant in terms of the human and material cost.

Between them, Russia and Ukraine have suffered around 2 million casualties over the course of the war.

Perhaps more importantly, the people of Ukraine have endured months and months during which the best news has been that their troops were still holding out despite relentless Russian assaults. This constant negativity has undermined morale among troops and civilians alike.

As a consequence, recruitment of new soldiers cannot keep pace with losses incurred on the frontlines – both in terms of casualties and desertions.

Moreover, potential conscripts increasingly resort to violence to avoid being drafted into the military. A new recruitment drive, announced by the Ukrainian commander-in-chief, Oleksandr Syrsky, will increase the potential for further unrest.

Russia’s air campaign against Ukraine’s critical infrastructure continues unabated, further damaging what is left of the vital energy grid and leaving millions of families facing lengthy daily blackouts.

The country’s air defence systems are increasingly overwhelmed by nightly Russian attacks, which are penetrating hitherto safe areas such as the capital and key population centres in the south and west. It’s a grim outlook for Ukraine’s civilian population, who are now heading into the war’s fourth winter. A ceasefire, let alone a viable peace agreement, remains a very distant prospect.

The political turmoil that has engulfed Zelensky and his government adds to the sense of a potentially catastrophic downward spiral. There have been corruption scandals before, but none has come as close to the president himself.

The amounts allegedly involved in the latest bribery scandal – around US$100 million (£75 million) – are eye-watering at a time of national emergency. But it is also the callousness of Ukraine’s elites apparently enriching themselves that adds insult to injury.

The latest scandal has also opened a potential Pandora’s box of vicious recriminations. As more and more members of Zelensky’s inner circle are engulfed in corruption allegations, more details of how different parts of his administration benefited from various schemes or simply turned a blind eye are likely to emerge.

This has damaged Zelensky’s own standing with his citizens and allies. What has helped him survive are both his track record as a war leader so far and the lack of alternatives.

Without a clear pathway towards a smooth transition to a new leadership in Ukraine, the mutual dependency between Zelensky and his European allies has grown.

Whose side is the US on anyway?

The US under Donald Trump is no longer, and perhaps never has been, a dependable ally for Ukraine. What is worse, however, is that America has also ceased to be a dependable ally for Europe.

America’s new national security strategy, published last week, has exploded into this already precarious situation and has sent shockwaves across the whole of Europe. It casts the European Union as more of a threat to US interests than Russia.

It also threatens open interference in the domestic affairs of its erstwhile European allies. And crucially for Kyiv, it outlines a trajectory towards American disengagement from European security.

This adds to Ukraine’s problems – not only because Washington cannot be seen as an honest broker in negotiations with Moscow. It also decreases the value of any western security guarantees. In the absence of a US backstop, the primarily European coalition of the willing lacks the capacity, for now, to establish credible deterrence against future Russian adventurism.

Map: The state of the conflict in Ukraine, December 7 2025. (Institute for the Study of War)

Efforts by the coalition of the willing cannot hide the fact that a fractured European Union – whose key member states, such as France and Germany, have fragile governments challenged by openly pro-Trump and pro-Putin populists – is unlikely to step quickly into the assurance gap left by the US. The twin challenge of investing in their own defensive capabilities while keeping Ukraine in the fight against Russia, to buy the essential time needed to do so, creates a profound dilemma.

Can Europe and Ukraine go it alone?

Without the US, Ukraine’s allies simply do not have the resources to enable Ukraine even to improve its negotiating position, let alone to win this war. In a worst-case scenario, all they may be able to accomplish is delaying a Ukrainian defeat.

But this may still be better than a peace deal that would require enormous resources for Ukraine’s reconstruction, while giving Russia an opportunity to regroup, rebuild and rearm for Putin’s next steps towards an even greater Russian sphere of influence in Europe.

At this moment, neither Zelensky nor his European allies can therefore have any interest in a peace deal negotiated between Trump and Putin.

A resignation by Zelensky or his government is unlikely to improve the situation. On the contrary, it is likely to add to Ukraine’s problems. Any new government would be subject to the most intense pressure to accept an imposed deal that Trump and Putin may be conspiring to strike.

Eventually, this war will end, and it will almost certainly require painful concessions from Ukraine. For Europe, the time until then needs to be used to develop a credible plan for stabilising Ukraine, deterring Russia and learning to live and survive without the transatlantic alliance.

The challenge for Europe is to do all three things simultaneously. The danger for Zelensky is that – for Europe – deterring Russia and appeasing the US become existential priorities in themselves, and that he and Ukraine could end up as bargaining chips in a bigger game.

Stefan Wolff, Professor of International Security, University of Birmingham and Tetyana Malyarenko, Professor of International Security, Jean Monnet Professor of European Security, National University Odesa Law Academy

This article is republished from The Conversation under a Creative Commons license. Read the original article.

