
Tech

AI is now part of our world. Uni graduates should know how to use it responsibly


Artificial intelligence is rapidly becoming an everyday part of our lives. Many of us use it without even realising, whether it is writing emails, finding a new TV show or managing smart devices in our homes.

MTStock Studio/Getty Images

Rachel Fitzgerald, The University of Queensland and Caitlin Curtis, The University of Queensland

It is also increasingly used in many professional contexts – from helping with recruitment to supporting health diagnoses and monitoring students’ progress in school.

But apart from a handful of computing-focused and other STEM programs, most Australian university students do not receive formal tuition in how to use AI critically, ethically or responsibly.

Here’s why this is a problem and what we can do instead.

AI use in unis so far

A growing number of Australian universities now allow students to use AI in certain assessments, provided the use is appropriately acknowledged.

But this does not teach students how these tools work or what responsible use involves.

Using AI is not as simple as typing questions into a chat function. There are widely recognised ethical issues around its use, including bias and misinformation. Understanding these is essential for students to use AI responsibly in their working lives.

So all students should graduate with a basic understanding of AI, its limitations, the role of human judgement and what responsible use looks like in their particular field.

We need students to be aware of bias in AI systems. This includes how their own biases could shape how they use the AI (the questions they ask and how they interpret its output), alongside an understanding of the broader ethical implications of AI use.

For example, do the data and the AI tool protect people’s privacy? Has the AI made a mistake? And if so, whose responsibility is that?

What about AI ethics?

The technical side of AI is covered in many STEM degrees. These degrees, along with philosophy and psychology disciplines, may also examine ethical questions around AI. But these issues are not a part of mainstream university education.

This is a concern. When future lawyers use predictive AI to draft contracts, or business graduates use AI for hiring or marketing, they will need skills in ethical reasoning.

Ethical issues in these scenarios could include unfair bias, like AI recommending candidates based on gender or race. It could include issues relating to a lack of transparency, such as not knowing how an AI system made a legal decision. Students need to be able to spot and question these risks before they cause harm.

In healthcare, AI tools are already supporting diagnosis, patient triage and treatment decisions.

As AI becomes increasingly embedded in professional life, the cost of uncritical use also scales up, from biased outcomes to real-world harm.

For example, if a teacher relies on AI carelessly to draft a lesson plan, students might learn a version of history that is biased or just plain wrong. A lawyer who over-relies on AI could submit a flawed court document, putting their client’s case at risk.

How can we do this?

There are international examples we can follow. The University of Texas at Austin and the University of Edinburgh both offer programs in ethics and AI. However, both are currently targeted at graduate students. The University of Texas program focuses on teaching STEM students about AI ethics, whereas the University of Edinburgh’s program has a broader, interdisciplinary focus.

Implementing AI ethics in Australian universities will require thoughtful curriculum reform. That means building interdisciplinary teaching teams that combine expertise from technology, law, ethics and the social sciences. It also means thinking seriously about how we engage students with this content through core modules, graduate capabilities or even mandatory training.

It will also require investment in academic staff development and new teaching resources that make these concepts accessible and relevant to different disciplines.

Government support is essential. Targeted grants, clear national policy direction, and nationally shared teaching resources could accelerate the shift. Policymakers could consider positioning universities as “ethical AI hubs”. This aligns with the government-commissioned 2024 Australian Universities Accord report, which called for building capacity to meet the demands of the digital era.

Today’s students are tomorrow’s decision-makers. If they don’t understand the risks of AI and its potential for error, bias or threats to privacy, we will all bear the consequences. Universities have a public responsibility to ensure graduates know how to use AI responsibly and understand why their choices matter.

Rachel Fitzgerald, Associate Professor and Deputy Associate Dean (Academic), Faculty of Business, Economics and Law, The University of Queensland and Caitlin Curtis, Research Fellow, Centre for Policy Futures, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Tech

OpenAI releases GPT-5.1 with enhanced conversational features

OpenAI launches GPT-5.1, enhancing ChatGPT with personality controls and improved conversational abilities for paid users


In Short:
– OpenAI launched GPT-5.1 with two models to improve ChatGPT’s conversation and user control.
– The update, initially for paid users, addresses prior complaints and introduces adaptive reasoning and personality presets.
OpenAI launched GPT-5.1 today, featuring two upgraded models aimed at enhancing ChatGPT’s conversational abilities and giving users better control over its personality.

The update began rolling out to paid subscribers on November 12, introducing GPT-5.1 Instant and GPT-5.1 Thinking, both designed to address complaints about the original GPT-5 release in August.

GPT-5.1 Instant is said to be “warmer by default and more conversational,” with early testers noting its playfulness while remaining clear and useful.


The launch follows a backlash after GPT-5’s release from users who criticized its “colder” tone and the removal of previous models like GPT-4o. OpenAI’s CEO, Sam Altman, admitted that discontinuing GPT-4o “was a mistake” and acknowledged the emotional attachment users had to specific models.

Adaptive Reasoning

GPT-5.1 Instant introduces adaptive reasoning, which helps it determine when to “think before responding” to complex questions.

This leads to marked improvements in mathematical and coding tasks. GPT-5.1 Thinking adjusts its processing time to the complexity of the task, resulting in clearer explanations and easier use.

The new version includes six personality presets, allowing users to tailor interactions. OpenAI aims for the model to integrate cognitive and emotional intelligence effectively.

For now, the rollout is limited to paid users, with free access to follow soon. Both models will be available via the API, and legacy models will remain accessible for three months.
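For developers, selecting the new model through the API should work like any other model choice. The sketch below is a minimal, unofficial example using the OpenAI Python SDK; the model identifier “gpt-5.1” is an assumption based on the announcement, not a confirmed name.

```python
# Minimal sketch: calling GPT-5.1 through the OpenAI Python SDK once it is exposed via the API.
# Assumption: the model identifier "gpt-5.1" mirrors the product name; check OpenAI's
# published model list for the exact string before relying on it.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-5.1",  # assumed identifier, not confirmed in the announcement
    messages=[
        {"role": "user", "content": "Explain adaptive reasoning in two sentences."},
    ],
)

print(response.choices[0].message.content)
```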




Tech

Apple postpones iPhone Air sequel due to poor sales

Apple delays iPhone Air 2 indefinitely after lacklustre sales of first model


In Short:
– Apple has postponed the second-generation iPhone Air indefinitely after poor sales of the current model.
– Production of the iPhone Air will stop, with Foxconn and Luxshare ceasing manufacturing by November and October respectively.
Apple has delayed the launch of its second-generation iPhone Air, which was scheduled for fall 2026, due to disappointing sales of the current model that debuted two months ago, as reported by The Information.

Engineers and suppliers have been informed that the iPhone Air will be removed from the production schedule without a new release date.

The decision coincides with a significant reduction in the production of the existing model. Foxconn is expected to cease all manufacturing by the end of November, while Luxshare will stop production by the end of October.


Sales for the iPhone Air have not met Apple’s expectations since its launch in September. Foxconn has limited its production lines for the device, and future orders are projected to decrease significantly. A survey indicated nearly no demand for the iPhone Air, with consumers instead choosing the iPhone 17 and iPhone 17 Pro models.

Production Challenges

The underperformance of the iPhone Air continues a trend of failed attempts by Apple to add a fourth model to its lineup.

The iPhone mini was previously discontinued after poor sales, followed by the larger Plus models, which faced similar challenges.

Apple had intended to develop a lighter second-generation iPhone Air with improved specifications but may now reconsider its design approach. The company also has plans for a staggered launch of the iPhone 18 lineup set for 2026 and early 2027.




Tech

Tech giants’ $47 billion AI infrastructure deals announced

Tech giants commit $47.7 billion to AI deals as demand for computing power soars and market diverges


In Short:
– Wall Street started November mixed as AI deals boosted tech stocks, especially Amazon’s share price after a major agreement.
– OpenAI plans $1.4 trillion investment for computing resources, with Big Tech predicting over $250 billion AI infrastructure spending this year.
Wall Street began the month with mixed performances as major artificial intelligence deals lifted tech stocks, while broader market indices diverged.

Amazon’s shares rose over 5% following a significant $38 billion cloud services agreement with OpenAI, contributing to gains for the Nasdaq despite a decline in the Dow.

The seven-year collaboration with Amazon Web Services marks OpenAI’s first major partnership with AWS, offering access to Nvidia graphics processing units essential for its AI expansion.

Amazon pointed to soaring demand for computing power resulting from rapid AI advancements, and said it is aiming for full capacity deployment by the end of 2026.


Microsoft also sealed a $9.7 billion agreement with IREN, highlighting the industry’s insatiable need for cloud capacity.

The deals underscore Big Tech’s ongoing commitment to AI infrastructure, with significant investments aimed at meeting the escalating demand for computing resources.

Investment Perspective

OpenAI CEO Sam Altman revealed intentions to invest $1.4 trillion to create 30 gigawatts of computing resources.

Major players, including Microsoft, Alphabet, Amazon, and Meta, have adjusted their capital expenditure forecasts for 2025, anticipating AI infrastructure spending to surpass $250 billion this year.

Despite market caution regarding inflated valuations, analysts remain optimistic about growth in the sector. Even amidst fears of an AI bubble, industry leaders assert ongoing investments will continue to bolster market performance through 2026.


