
Tech

AI is now part of our world. Uni graduates should know how to use it responsibly


Artificial intelligence is rapidly becoming an everyday part of our lives. Many of us use it without even realising, whether it be writing emails, finding a new TV show or managing smart devices in our homes.


Rachel Fitzgerald, The University of Queensland and Caitlin Curtis, The University of Queensland

It is also increasingly used in many professional contexts – from helping with recruitment to supporting health diagnoses and monitoring students’ progress in school.

But apart from a handful of computing-focused and other STEM programs, most Australian university students do not receive formal tuition in how to use AI critically, ethically or responsibly.

Here’s why this is a problem and what we can do instead.

AI use in unis so far

A growing number of Australian universities now allow students to use AI in certain assessments, provided the use is appropriately acknowledged.

But this does not teach students how these tools work or what responsible use involves.

Using AI is not as simple as typing questions into a chat function. There are widely recognised ethical issues around its use including bias and misinformation. Understanding these is essential for students to use AI responsibly in their working lives.

So all students should graduate with a basic understanding of AI, its limitations, the role of human judgement and what responsible use looks like in their particular field.

We need students to be aware of bias in AI systems. This includes how their own biases could shape how they use the AI (the questions they ask and how they interpret its output), alongside an understanding of the broader ethical implications of AI use.

For example, does the data and the AI tool protect people’s privacy? Has the AI made a mistake? And if so, whose responsibility is that?

What about AI ethics?

The technical side of AI is covered in many STEM degrees. These degrees, along with philosophy and psychology disciplines, may also examine ethical questions around AI. But these issues are not a part of mainstream university education.

This is a concern. When future lawyers use predictive AI to draft contracts, or business graduates use AI for hiring or marketing, they will need skills in ethical reasoning.

Ethical issues in these scenarios could include unfair bias, like AI recommending candidates based on gender or race. It could include issues relating to a lack of transparency, such as not knowing how an AI system made a legal decision. Students need to be able to spot and question these risks before they cause harm.

In healthcare, AI tools are already supporting diagnosis, patient triage and treatment decisions.

As AI becomes increasingly embedded in professional life, the cost of uncritical use also scales up, from biased outcomes to real-world harm.

For example, if a teacher relies on AI carelessly to draft a lesson plan, students might learn a version of history that is biased or just plain wrong. A lawyer who over-relies on AI could submit a flawed court document, putting their client’s case at risk.

How can we do this?

There are international examples we can follow. The University of Texas at Austin and the University of Edinburgh both offer programs in ethics and AI, though both are currently targeted at graduate students. The University of Texas program focuses on teaching STEM students about AI ethics, whereas the University of Edinburgh’s program has a broader, interdisciplinary focus.

Implementing AI ethics in Australian universities will require thoughtful curriculum reform. That means building interdisciplinary teaching teams that combine expertise from technology, law, ethics and the social sciences. It also means thinking seriously about how we engage students with this content through core modules, graduate capabilities or even mandatory training.

It will also require investment in academic staff development and new teaching resources that make these concepts accessible and relevant to different disciplines.

Government support is essential. Targeted grants, clear national policy direction, and nationally shared teaching resources could accelerate the shift. Policymakers could consider positioning universities as “ethical AI hubs”. This aligns with the government-commissioned 2024 Australian University Accord report, which called for building capacity to meet the demands of the digital era.

Today’s students are tomorrow’s decision-makers. If they don’t understand the risks of AI and its potential for error, bias or threats to privacy, we will all bear the consequences. Universities have a public responsibility to ensure graduates know how to use AI responsibly and understand why their choices matter.

Rachel Fitzgerald, Associate Professor and Deputy Associate Dean (Academic), Faculty of Business, Economics and Law, The University of Queensland and Caitlin Curtis, Research Fellow, Centre for Policy Futures, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Airbus A320 fleet faces software upgrade due to risk

Airbus alerts A320 operators to urgent software fix after JetBlue incident raises safety concerns



In Short:
– Airbus warns over half of A320 fleet needs software fixes due to potential data corruption risks.
– Affected airlines must complete upgrades before next flights, with operational disruptions anticipated during a busy travel season.

Airbus has issued a warning regarding its A320 fleet, indicating that over half of the active jets will require a software fix.

It follows a recent incident involving a JetBlue Airways aircraft, where “intense solar radiation” was found to potentially corrupt data crucial for flight control system operation.

The European plane manufacturer stated that around 6,500 jets may be affected. A regulatory directive requires the software upgrade to be completed before each affected aircraft’s next scheduled flight.


Operational disruptions for both passengers and airlines are anticipated. The issue stems from an October 30 incident in which a JetBlue flight’s computer malfunctioned, causing an uncommanded descent. No injuries occurred, but the automated flight-control system’s malfunction was identified as a contributing factor.

Airlines, including American Airlines Group, have begun to implement the required upgrades.

The majority of affected jets need only a straightforward software update, but around 1,000 older aircraft will require a hardware replacement and must be grounded while the work is carried out.

Hungarian airline Wizz Air has also initiated necessary maintenance for compliance, potentially affecting flights. This announcement has surfaced during a busy travel season in the US, with many facing delays due to other factors as well.

Regulatory Response

The European Union Aviation Safety Agency has mandated that A320 operators replace or modify specific elevator-aileron computers. The directive follows the JetBlue incident, where a malfunction led to a temporary loss of altitude.

Airbus’s fix applies to both the A320 and A320neo models, representing a vital response in ensuring aircraft safety.




China blocks ByteDance from using Nvidia chips in new data centres

China blocks ByteDance from using Nvidia chips, tightening tech control and pushing for domestic AI innovation amid U.S. restrictions.




Chinese regulators have moved to block ByteDance from deploying Nvidia chips in newly built data centres, tightening control over foreign technology used by major Chinese tech giants. The decision comes after ByteDance made substantial purchases of Nvidia hardware amid fears of shrinking supply from the United States.

Washington has already restricted the sale of advanced chips to China, permitting only lower-performance versions on the market. Beijing’s latest move reflects its push to reduce dependence on U.S. technology and accelerate home-grown AI innovation.

The ban places operational and financial pressure on ByteDance, which must now work around a growing pile of Nvidia chips it is no longer allowed to use. Domestic suppliers like Huawei are expected to step in as China intensifies its pursuit of tech self-reliance.





OpenAI launches shopping research tool for ChatGPT users

OpenAI launches shopping research tool to enhance e-commerce experience ahead of holiday season spending boost



In Short:
– OpenAI’s “shopping research” tool helps users find detailed shopping guides tailored to their preferences.
– Users can access Instant Checkout for purchases while ensuring user chats are not shared with retailers.
OpenAI has launched a new tool called “shopping research”, coinciding with an increase in consumer spending ahead of the holiday season. The tool is aimed at ChatGPT users seeking comprehensive shopping guides that detail top products, key differences, and the latest retailer information.

Users can customise their guides based on budget, features, and recipients. OpenAI notes that while the tool takes a few minutes to generate responses, users can still use ChatGPT for quicker queries like price checks.


When users ask specific prompts, such as finding a quiet cordless stick vacuum or a gift for a niece who loves art, the shopping research tool will appear automatically. It can also be accessed via the menu.

Shopping Research

OpenAI has been expanding its e-commerce capabilities, with the introduction of the Instant Checkout feature in September, enabling purchases directly through ChatGPT.

Soon, users of the shopping research tool will also be able to use Instant Checkout for making purchases.

OpenAI assures that shopping research results are derived from publicly available retail websites and will not disclose user chats to retailers, although it does warn that inaccuracies may occur in product availability and pricing.

Shopping research is now available to OpenAI’s Free, Go, Plus, and Pro users logged into ChatGPT.


