Jake Goldenfein, The University of Melbourne; Christine Parker, The University of Melbourne, and Kimberlee Weatherall, University of Sydney
Today, the Albanese Labor government released the long-awaited National AI Plan, “a whole-of-government framework that ensures technology works for people, not the other way around”.
With this plan, the government promises an inclusive artificial intelligence (AI) economy that protects workers, fills service gaps, and supports local AI development.
In a major reversal, it also confirms Australia won’t implement mandatory guardrails for high-risk AI. Instead, it argues our existing legal regime is sufficient, and any minor changes for specific AI harms or risks can be managed with help from a new A$30 million AI Safety Institute within the Department of Industry.
Avoiding big changes to Australia’s legal system makes sense in light of the plan’s primary goal – making Australia an attractive location for international data centre investment.
The initial caution is gone
After the public release of ChatGPT in November 2022 ushered in a generative AI boom, initial responses focused on existential risks posed by AI.
Leading AI figures even called for a pause on all AI research. Governments outlined plans to regulate.
But as investment in AI has grown, governments around the world have now shifted from caution to an AI race: embracing the opportunities while managing risks.
In 2023, the European Union created the world’s leading AI plan, aimed at promoting the uptake of human-centric and trustworthy artificial intelligence. The United States launched its own, more bullish action plan in July 2025.
The new Australian plan prioritises creating a local AI software industry, spreading the benefit of AI “productivity gains” to workers and public service users, capturing some of the relentless global investment in AI data centres, and promoting Australia’s regional leadership by becoming an infrastructure and computing hub in the Indo-Pacific.
Those goals are outlined in the plan’s three pillars: capturing the opportunities, spreading the benefits, and keeping us safe.
What opportunities are we capturing?
The jury is still out on whether AI will actually boost productivity for all organisations and businesses that adopt it.
Regardless, global investment in AI infrastructure has been immense, with some predictions of global data centre investment reaching A$8 trillion by 2030 (so long as the bubble doesn’t burst before then).
Through the new AI plan, Australia wants to get in on the boom and become a location for US and global tech industry capital investment.
In the AI plan, the selling point for increased Australian data centre investment is the boost this would provide for our renewable energy transition. States are already competing for that investment. New South Wales has streamlined data centre approval processes, and Victoria is creating incentives to “ruthlessly” chase data centre investment in greenfield sites.
Under the new federal environmental law reforms passed last week, new data centre approvals may be fast-tracked if they are co-located with new renewable power, meaning less time to consider biodiversity and other environmental impacts.
But data centres are also controversial. Concerns about the energy and water demands of large data centres in Australia are already growing.
The water use impacts of data centres are significant – and the plan is remarkably silent on this apart from promising “efficient liquid cooling”. So far, experience from Germany and the US shows data centres stretching energy grids beyond their limit.
It’s true data centre companies are likely to invest in renewable energy, but at the same time growing data centre demand is currently being used to justify continued fossil fuel use.
There’s some requirement for Australian agencies to consider the environmental sustainability of data centres hosting government services. But a robust plan for environmental assessment and reporting across public and private sectors is lacking.
Who will really benefit from AI?
The plan promises the economic and efficiency benefits of AI will be for everyone – workers, small and medium businesses, and those receiving government services.
Recent scandals suggest Australian businesses are keen to use AI to reduce labour costs without necessarily maintaining service quality. This has created anxiety around the impact of AI on labour markets and work conditions.
Australia’s AI plan tackles this through promoting worker development, training and re-skilling, rather than protecting existing conditions.
The Australian union movement will need to be active to make the “AI-ready workers” narrative a reality, and to protect workers from AI being used to reduce labour costs, increase surveillance, and speed up work.
The plan also mentions improving public service efficiency. Whether or not those efficiency gains are possible is hard to say. However, the plan does recognise we’ll need comprehensive investment to unlock the value of private and public data holdings useful for AI.
Will we be safe enough?
With the release of the plan, the government has officially abandoned last year’s proposals for mandatory guardrails for high-risk AI systems. It claims Australia’s existing legal frameworks are already strong, and can be updated “case by case”.
As we’ve pointed out previously, this is out of step with public opinion. More than 75% of Australians want AI regulation.
It’s also out of step with other countries. The European Union already prohibits the most risky AI systems, and has updated product safety and platform regulations. It’s also currently refining a framework for regulating high-risk AI systems. Canadian federal government systems are regulated by a tiered risk management system. South Korea, Japan, Brazil and China all have rules that govern AI-specific risks.
Australia’s claim to have a strong, adequate and stable legal framework would be much more credible if the document included a plan for, or clarity about, our significant law reform backlog. This backlog includes privacy rights, consumer protection, automated decision-making in government post-Robodebt, as well as copyright and a digital duty of care.
Ultimately, the National AI Plan says some good things about sustainability, sharing the benefits, and keeping Australians safe, even as the government makes a pitch for data centre investment and becoming an AI hub for the region.
Compared with those of some other nations, the plan is short on specificity. The test will lie in whether the government gives substance to its goals and promises, instead of just chasing the short-term AI investment dollar.
Jake Goldenfein, Senior Lecturer, Law and Technology, The University of Melbourne; Christine Parker, Professor of Law, The University of Melbourne, and Kimberlee Weatherall, Professor of Law, University of Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.