
What are AI hallucinations? Why AIs sometimes make things up


When someone sees something that isn’t there, people often refer to the experience as a hallucination.

Hallucinations occur when your sensory perception does not correspond to external stimuli.

Technologies that rely on artificial intelligence can have hallucinations, too.

When an algorithmic system generates information that seems plausible but is actually inaccurate or misleading, computer scientists call it an AI hallucination. Researchers have found these behaviors in different types of AI systems, from chatbots such as ChatGPT to image generators such as Dall-E to autonomous vehicles. We are information science researchers who have studied hallucinations in AI speech recognition systems.

Wherever AI systems are used in daily life, their hallucinations can pose risks. Some may be minor – when a chatbot gives the wrong answer to a simple question, the user may end up ill-informed. But in other cases, the stakes are much higher. From courtrooms where AI software is used to make sentencing decisions to health insurance companies that use algorithms to determine a patient’s eligibility for coverage, AI hallucinations can have life-altering consequences. They can even be life-threatening: Autonomous vehicles use AI to detect obstacles, other vehicles and pedestrians.

Making it up

Hallucinations and their effects depend on the type of AI system. With large language models – the underlying technology of AI chatbots – hallucinations are pieces of information that sound convincing but are incorrect, made up or irrelevant. An AI chatbot might create a reference to a scientific article that doesn’t exist or provide a historical fact that is simply wrong, yet make it sound believable.

In a 2023 court case, for example, a New York attorney submitted a legal brief that he had written with the help of ChatGPT. A discerning judge later noticed that the brief cited a case that ChatGPT had made up. If no one had caught the fabricated citation, it could have led to a different outcome in the courtroom.

With AI tools that can recognize objects in images, hallucinations occur when the AI generates captions that are not faithful to the provided image. Imagine asking a system to list the objects in an image that shows only a woman from the chest up talking on a phone, and receiving a response that describes a woman talking on a phone while sitting on a bench. Such inaccurate descriptions can have serious consequences in contexts where accuracy is critical.
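
To see how such captions are produced in practice, here is a minimal sketch using the Hugging Face transformers library; the model name is one publicly available example, and the file photo.jpg is a hypothetical placeholder. The point is that the model's caption is a pattern-based guess that still needs to be checked against what the image actually shows.

```python
# Minimal sketch: generate a caption for an image and print it for human review.
# Assumes the Hugging Face "transformers" package; "photo.jpg" is a hypothetical file.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

result = captioner("photo.jpg")            # returns a list of dicts
caption = result[0]["generated_text"]

# The caption is a guess based on learned patterns, not a verified description.
# Details such as "sitting on a bench" should be confirmed by a person before
# the caption is used anywhere accuracy matters.
print("Model caption:", caption)
```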

What causes hallucinations

Engineers build AI systems by gathering massive amounts of data and feeding it into a computational system that detects patterns in the data. The system develops methods for responding to questions or performing tasks based on those patterns.

Supply an AI system with 1,000 photos of different breeds of dogs, labeled accordingly, and the system will soon learn to detect the difference between a poodle and a golden retriever. But feed it a photo of a blueberry muffin and, as machine learning researchers have shown, it may tell you that the muffin is a chihuahua.

Object recognition AIs can have trouble distinguishing between chihuahuas and blueberry muffins and between sheepdogs and mops.
Shenkman et al, CC BY
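
To make the pattern-matching idea concrete, below is a minimal, hypothetical sketch using scikit-learn, with made-up numeric feature vectors standing in for photos. It is not how production image classifiers are built, but it shows why a model trained only on dog breeds has to answer with a dog breed, even when shown something else entirely.

```python
# Minimal sketch with made-up numeric "image features" instead of real photos.
# A classifier trained only on two dog breeds can only ever answer with one of them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each image is summarised by 4 numbers (e.g. colour/texture statistics).
poodles = rng.normal(loc=[0.2, 0.8, 0.5, 0.1], scale=0.05, size=(500, 4))
retrievers = rng.normal(loc=[0.7, 0.3, 0.6, 0.4], scale=0.05, size=(500, 4))

X = np.vstack([poodles, retrievers])
y = ["poodle"] * 500 + ["golden retriever"] * 500

model = LogisticRegression(max_iter=1000).fit(X, y)

# A blueberry muffin is not in the training data, but the model is forced to
# pick one of the labels it knows, and may do so with high confidence.
muffin = [[0.6, 0.4, 0.55, 0.35]]
print(model.predict(muffin))           # one of the two dog breeds
print(model.predict_proba(muffin))     # probabilities over dog breeds only
```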

When a system doesn’t understand the question or the information that it is presented with, it may hallucinate. Hallucinations often occur when the model fills in gaps based on similar contexts from its training data, or when it is built using biased or incomplete training data. This leads to incorrect guesses, as in the case of the mislabeled blueberry muffin.

It’s important to distinguish between AI hallucinations and intentionally creative AI outputs. When an AI system is asked to be creative – like when writing a story or generating artistic images – its novel outputs are expected and desired. Hallucinations, on the other hand, occur when an AI system is asked to provide factual information or perform specific tasks but instead generates incorrect or misleading content while presenting it as accurate.

The key difference lies in the context and purpose: Creativity is appropriate for artistic tasks, while hallucinations are problematic when accuracy and reliability are required.

To address these issues, companies have suggested using high-quality training data and limiting AI responses to follow certain guidelines. Nevertheless, these issues may persist in popular AI tools.

Large language models hallucinate in several ways.

What’s at risk

The impact of an output such as calling a blueberry muffin a chihuahua may seem trivial, but consider the different kinds of technologies that use image recognition systems: An autonomous vehicle that fails to identify objects could lead to a fatal traffic accident. An autonomous military drone that misidentifies a target could put civilians’ lives in danger.

For AI tools that provide automatic speech recognition, hallucinations are AI transcriptions that include words or phrases that were never actually spoken. This is more likely to occur in noisy environments, where an AI system may end up adding new or irrelevant words in an attempt to decipher background noise such as a passing truck or a crying infant.
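
One practical way to surface possible transcription hallucinations is to look at the model's own confidence signals. Below is a minimal sketch assuming the open-source openai-whisper package and a hypothetical noisy recording named noisy_clinic_visit.wav; the thresholds are illustrative, not recommended values.

```python
# Minimal sketch: transcribe a noisy recording and flag segments that look suspect.
# Assumes the open-source "openai-whisper" package; the filename and thresholds
# are illustrative only.
import whisper

model = whisper.load_model("base")
result = model.transcribe("noisy_clinic_visit.wav")

for seg in result["segments"]:
    # Segments where the model assigns low probability to its own words, or a high
    # probability that no one was speaking, are candidates for hallucinated text.
    suspicious = seg["avg_logprob"] < -1.0 or seg["no_speech_prob"] > 0.5
    flag = "CHECK" if suspicious else "ok"
    print(f'[{flag}] {seg["start"]:.1f}-{seg["end"]:.1f}s: {seg["text"]}')
```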

As these systems become more regularly integrated into health care, social service and legal settings, hallucinations in automatic speech recognition could lead to inaccurate clinical or legal outcomes that harm patients, criminal defendants or families in need of social support.

Check AI’s work

Regardless of AI companies’ efforts to mitigate hallucinations, users should stay vigilant and question AI outputs, especially when they are used in contexts that require precision and accuracy. Double-checking AI-generated information with trusted sources, consulting experts when necessary, and recognizing the limitations of these tools are essential steps for minimizing their risks.

Anna Choi, Ph.D. Candidate in Information Science, Cornell University and Katelyn Mei, Ph.D. Student in Information Science, University of Washington

This article is republished from The Conversation under a Creative Commons license. Read the original article.


What Saudi Arabia’s role in the Electronic Arts buyout tells us about ‘game-washing’


What Saudi Arabia’s role in the Electronic Arts buyout tells us about image, power and ‘game-washing’

Jacqueline Burgess, University of the Sunshine Coast

Video game publisher Electronic Arts (EA), one of the biggest video game companies in the world behind games such as The Sims and Battlefield, has been sold to a consortium of buyers for US$55 billion (about A$83 billion). It is potentially the largest-ever buyout funded by private equity firms. Not AI, nor mining or banking, but video games.

The members of the consortium include: Silver Lake Partners, an American private global equity firm focusing on technology; the Public Investment Fund (PIF), Saudi Arabia’s sovereign wealth fund; and the investment firm Affinity Partners, run by Jared Kushner, son-in-law of American President Donald Trump.

The consortium will purchase all of the publicly traded company’s shares, making it private. But while the consortium and EA’s shareholders will likely be celebrating – each share was valued at US$210, representing a 25% premium – it’s not all good news.

PIF’s acquisition of EA raises concerns about possible “game-washing” and about less-than-ideal future business practices.

EA’s poor reputation

Video games are big business. The global video game industry is worth more than the film and music industries combined. But why would these buyers specifically want to buy EA, an entity that has won The Worst Company in America award twice?

It has been criticised for alleged poor labour practices, a focus on online gaming (even when it’s not ideal, such as in single-player stories), and a history of acquiring popular game studios and franchises and running them into the ground.

Players of some of EA’s most beloved franchises, such as The Sims, Dragon Age and Star Wars Battlefront II, believe the games have been negatively impacted due to the company meddling in production, and wanting to focus on online play and micro-transactions.

Microtransactions are small amounts of money paid to access, or potentially access, in-game items or currency. Over time, they can add up to a lot of money, and have even been linked to the creation of problem gambling behaviours. Unsurprisingly, they are not popular among players.

Current global economic stresses have affected video games and other high-tech industries. The development costs of a video game can be hundreds of millions of dollars. EA has reacted to its slowing growth by cancelling games and laying off close to 2,000 workers since 2023. So a US$55 billion offer probably looked enticing.

Saudi Arabia’s investment spree

In recent years, the Saudi wealth fund has been on an entertainment investment splurge. Before this latest acquisition, PIF invested heavily in both golf and tennis.

It is a sponsor and official naming rights partner of both the Women’s Tennis Association rankings and the Association of Tennis Professionals rankings.

The wealth fund also helped establish the LIV Golf tour in 2022, in opposition to the PGA Tour. By offering huge sums of money, it was able to attract players away from the PGA Tour. One player was reportedly offered US$125 million (A$189 million). This tactic worked; a merger was announced between LIV, the DP World Tour (the European golf tour) and the PGA Tour (the North American golf tour) in 2023, with PIF as the main funder.

PIF, via its subsidiaries, has also been acquiring stakes in other video game companies. For example, it is one of the largest shareholders in Nintendo, the developer behind Mario, and earlier this year it purchased Niantic’s games business, the developer of Pokémon Go, for US$3.5 billion (A$5.3 billion).

Why does PIF want video game companies?

Live sport and video games have a few things in common: they are fun, engaging and entertaining. And being known for entertainment is good PR for a country that has been accused of human rights abuses.

PIF’s investment in sport has been called “sportswashing”: using an association with sport to counteract bad publicity and a tarnished moral reputation. Video games, with their interactivity and entertainment value, represent an opportunity for game-washing.

The fact EA owns many sports game franchises would also be a bonus, potentially allowing for further video game and sport collaboration. And the fact the video game industry is projected to keep growing globally makes it a good investment for an oil-rich nation looking to economically diversify.

Beyond game-washing concerns, we also need to pay attention to the type of buyout happening here. This is a “leveraged” buyout, meaning part of the purchase price – in this case US$20 billion (A$30 billion) – is funded as debt taken on by the company. So once the acquisition is complete, EA will have US$20 billion of new debt.

With all that new debt to service, it would only be natural to have concerns about more lay-offs, cost-cutting and increasing monetisation via strategies such as microtransactions. Ultimately, this would result in a poorer experience for players. It seems the more things change, the more they stay the same.

Jacqueline Burgess, Lecturer in International Business, University of the Sunshine Coast

This article is republished from The Conversation under a Creative Commons license. Read the original article.


80% of ransomware victims pay ransom, says report

Hiscox report reveals 80% of ransomware victims pay ransom, but only 60% recover data successfully


In Short:
– Cyber attacks increasingly target businesses, with 80% of ransomware victims opting to pay ransoms.
– SMEs are often affected, with only 60% recovering data after paying ransoms amidst rising cyber insurance costs.
Cyber attacks are increasingly targeting sensitive business data, with many companies paying ransoms. A report from Hiscox indicates that 80% of businesses affected by ransomware over the past year opted to pay.

The annual Cyber Readiness Report highlights a concerning trend in ransomware attacks against well-known companies, including Marks and Spencer, the Co-op, and Jaguar Land Rover.

The latter recently received a £1.5bn government loan guarantee aimed at protecting its supply chain, which includes numerous small firms.


Many victims of cyber attacks are small and medium-sized enterprises (SMEs), which often require assistance to recover. Hiscox reported that while 27% of the 5,750 SMEs surveyed faced ransomware attacks, only 60% of those that paid the ransom managed to recover their data.

Impact on Firms

The broader findings revealed that nearly 60% of the businesses surveyed experienced some form of cyber attack, with numerous businesses attributing their vulnerabilities to artificial intelligence.

Companies face not only financial repercussions, including fines and lost revenue, but also damage to their reputations. Eddie Lamb of Hiscox warned against underestimating the severe consequences of cyber attacks on businesses of all sizes.

Jaguar Land Rover was reportedly finalising cyber insurance when it was attacked, incurring significant losses. Industry experts note that the rising costs of comprehensive cyber insurance policies may leave many firms unprotected. The cyber insurance market is growing, responding to the high-profile impacts experienced by businesses like M&S, which anticipates recovering losses through insurance after its own ransomware incident.




OpenAI to launch TikTok-like AI video app Sora

OpenAI to launch Sora, an AI-driven social app with TikTok-like features amid TikTok’s regulatory uncertainties


In Short:
– OpenAI is launching Sora 2, a social media app with AI-generated videos, competing with TikTok.
– The app features a unique identity verification system and provides short video content without uploads.
OpenAI is set to unveil Sora 2, a new social media app that imitates TikTok by offering AI-generated video content. The strategy positions OpenAI to directly challenge established platforms in the AI video market.

The platform has begun internal testing. Employees have reportedly reacted so positively that managers have raised concerns about productivity. Sora 2 features swipe-to-scroll navigation and offers personalized video recommendations.


A unique identity verification system allows users to authenticate their likeness for use in AI-generated videos. Users will be notified when their likeness is used in videos, regardless of whether these are published. Video lengths are capped at 10 seconds, with no capability to upload personal content.

The app includes typical social media features like likes and comments, with a user interface that resembles TikTok’s “For You” page.

Strategic Launch

OpenAI’s timing for this launch is strategic, coinciding with uncertainties surrounding TikTok’s U.S. operations. Recent deals aim to transfer majority control of TikTok’s American business to U.S. investors while permitting ByteDance a minority stake.

OpenAI perceives the current turbulence as a unique opportunity to introduce a competitive platform for short-form videos, appealing to users seeking alternatives during this period of regulatory scrutiny.


