
Tech

AI is now part of our world. Uni graduates should know how to use it responsibly

Artificial intelligence is rapidly becoming an everyday part of our lives. Many of us use it without even realising, whether it be writing emails, finding a new TV show or managing smart devices in our homes.


Rachel Fitzgerald, The University of Queensland and Caitlin Curtis, The University of Queensland

It is also increasingly used in many professional contexts – from helping with recruitment to supporting health diagnoses and monitoring students’ progress in school.

But apart from a handful of computing-focused and other STEM programs, most Australian university students do not receive formal tuition in how to use AI critically, ethically or responsibly.

Here’s why this is a problem and what we can do instead.

AI use in unis so far

A growing number of Australian universities now allow students to use AI in certain assessments, provided the use is appropriately acknowledged.

But this does not teach students how these tools work or what responsible use involves.

Using AI is not as simple as typing questions into a chat function. There are widely recognised ethical issues around its use including bias and misinformation. Understanding these is essential for students to use AI responsibly in their working lives.

So all students should graduate with a basic understanding of AI, its limitations, the role of human judgement and what responsible use looks like in their particular field.

We need students to be aware of bias in AI systems. This includes how their own biases could shape how they use the AI (the questions they ask and how they interpret its output), alongside an understanding of the broader ethical implications of AI use.

For example, does the data and the AI tool protect people’s privacy? Has the AI made a mistake? And if so, whose responsibility is that?

What about AI ethics?

The technical side of AI is covered in many STEM degrees. These degrees, along with philosophy and psychology disciplines, may also examine ethical questions around AI. But these issues are not a part of mainstream university education.

This is a concern. When future lawyers use predictive AI to draft contracts, or business graduates use AI for hiring or marketing, they will need skills in ethical reasoning.

Ethical issues in these scenarios could include unfair bias, like AI recommending candidates based on gender or race. It could include issues relating to a lack of transparency, such as not knowing how an AI system made a legal decision. Students need to be able to spot and question these risks before they cause harm.

In healthcare, AI tools are already supporting diagnosis, patient triage and treatment decisions.

As AI becomes increasingly embedded in professional life, the cost of uncritical use also scales up, from biased outcomes to real-world harm.

For example, if a teacher relies on AI carelessly to draft a lesson plan, students might learn a version of history that is biased or just plain wrong. A lawyer who over-relies on AI could submit a flawed court document, putting their client’s case at risk.

How can we do this?

There are international examples we can follow. The University of Texas at Austin and the University of Edinburgh both offer programs in ethics and AI. However, both of these are currently targeted at graduate students. The University of Texas program is focused on teaching STEM students about AI ethics, whereas the University of Edinburgh’s program has a broader, interdisciplinary focus.

Implementing AI ethics in Australian universities will require thoughtful curriculum reform. That means building interdisciplinary teaching teams that combine expertise from technology, law, ethics and the social sciences. It also means thinking seriously about how we engage students with this content through core modules, graduate capabilities or even mandatory training.

It will also require investment in academic staff development and new teaching resources that make these concepts accessible and relevant to different disciplines.

Government support is essential. Targeted grants, clear national policy direction, and nationally shared teaching resources could accelerate the shift. Policymakers could consider positioning universities as “ethical AI hubs”. This aligns with the government-commissioned 2024 Australian Universities Accord report, which called for building capacity to meet the demands of the digital era.

Today’s students are tomorrow’s decision-makers. If they don’t understand the risks of AI and its potential for error, bias or threats to privacy, we will all bear the consequences. Universities have a public responsibility to ensure graduates know how to use AI responsibly and understand why their choices matter.

Rachel Fitzgerald, Associate Professor and Deputy Associate Dean (Academic), Faculty of Business, Economics and Law, The University of Queensland and Caitlin Curtis, Research Fellow, Centre for Policy Futures, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Tech

OpenAI Unveils ChatGPT Atlas: The Future of Browsing?


OpenAI has taken another giant leap forward with the launch of ChatGPT Atlas — an AI-powered web browser that could redefine how people search, explore, and interact online. Investors and competitors are watching closely as this new technology challenges the dominance of traditional browsers like Google Chrome.

With ChatGPT Atlas, users may soon experience a web that feels less like typing into a search box and more like conversing with an intelligent assistant. The integration of AI could make browsing faster, more intuitive, and more personalised than ever before — but it also raises serious questions about privacy and data use.

As AI becomes more deeply embedded in the digital world, ChatGPT Atlas could represent the next major step toward a fully AI-driven online experience. What does this mean for users — and for the tech giants trying to keep up?


Tech

OpenAI limits deepfakes after Bryan Cranston’s concerns

OpenAI protects against deepfakes on Sora 2 after Bryan Cranston and SAG-AFTRA raise concerns over unauthorized AI-generated content


In Short:
– OpenAI partners with Bryan Cranston and unions to combat deepfakes on its Sora app.
– The app now includes options for people to control their likenesses and voices.
OpenAI announced it will work with Bryan Cranston, SAG-AFTRA, and actor unions to combat deepfakes on its AI video app, Sora. Cranston voiced concerns after unauthorized AI-generated clips featuring his likeness emerged following Sora 2’s launch in late September. He expressed gratitude to OpenAI for taking steps to safeguard actors’ rights to control their likenesses.


The partnership aims to enhance protections against unauthorized AI content. The Creative Artists Agency and United Talent Agency had previously criticized OpenAI, citing risks to their clients’ intellectual property.

Last week, OpenAI blocked disrespectful videos of Martin Luther King Jr. at the request of his estate, following similar pressures. Zelda Williams also requested the public refrain from sending her AI-generated clips of her late father, Robin Williams.

Policy Changes

Following tensions post-launch, CEO Sam Altman revised Sora’s policy to give rights holders greater control of their likenesses.

The app now allows individuals to opt out, reflecting OpenAI’s commitment to respond quickly to concerns from performers.

OpenAI backs the NO FAKES Act, supporting legislation that aims to protect individuals from unauthorized AI-generated representations.

OpenAI is focused on ensuring performers’ rights are respected regarding the misuse of their voices and likenesses. Altman reiterated the company’s dedication to these protections.



Tech

Major apps down as AWS experiences global outage

AWS outage disrupts Fortnite, Snapchat and multiple services globally


In Short:
– Monday’s AWS outage disrupted major apps including Fortnite and Snapchat and affected companies globally.
– UK companies including Lloyds Bank and Vodafone reported issues due to the AWS outage.

Amazon’s AWS experienced a significant outage on Monday, impacting major apps including Fortnite and Snapchat. The disruption affected connectivity for numerous companies globally. AWS reported increased error rates and latencies across multiple services and is attempting to recover quickly.


The outage marks the first significant internet disruption since a previous incident last year that impacted essential technology systems globally. AWS offers on-demand computing and storage services and is vital for many websites and platforms.

Multiple companies reported disruptions, including AI startup Perplexity, cryptocurrency exchange Coinbase, and trading app Robinhood. Perplexity’s CEO confirmed on X that the outages were linked to AWS issues.

Amazon’s shopping site, Prime Video, and Alexa services also faced difficulties, according to Downdetector. Other affected platforms included popular gaming applications like Clash Royale and financial services such as Venmo and Chime.

Uber competitor Lyft’s app was reported down for numerous users in the U.S. Messaging platform Signal also acknowledged connection problems stemming from the AWS outage.

British Companies

In the UK, Lloyds Bank, Bank of Scotland, and telecom services provider Vodafone were notably affected. HMRC’s website also encountered issues during the outage.

Elon Musk stated that his platform, X, remained operational despite the widespread disruptions.

