
What are AI hallucinations? Why AIs sometimes make things up


When someone sees something that isn’t there, people often refer to the experience as a hallucination.

Hallucinations occur when your sensory perception does not correspond to external stimuli.

Technologies that rely on artificial intelligence can have hallucinations, too.

When an algorithmic system generates information that seems plausible but is actually inaccurate or misleading, computer scientists call it an AI hallucination. Researchers have found these behaviors in different types of AI systems, from chatbots such as ChatGPT to image generators such as Dall-E to autonomous vehicles. We are information science researchers who have studied hallucinations in AI speech recognition systems.

Wherever AI systems are used in daily life, their hallucinations can pose risks. Some may be minor – when a chatbot gives the wrong answer to a simple question, the user may end up ill-informed. But in other cases, the stakes are much higher. From courtrooms where AI software is used to make sentencing decisions to health insurance companies that use algorithms to determine a patient’s eligibility for coverage, AI hallucinations can have life-altering consequences. They can even be life-threatening: Autonomous vehicles use AI to detect obstacles, other vehicles and pedestrians.

Making it up

Hallucinations and their effects depend on the type of AI system. With large language models – the underlying technology of AI chatbots – hallucinations are pieces of information that sound convincing but are incorrect, made up or irrelevant. An AI chatbot might create a reference to a scientific article that doesn’t exist or provide a historical fact that is simply wrong, yet make it sound believable.

In a 2023 court case, for example, a New York attorney submitted a legal brief that he had written with the help of ChatGPT. A discerning judge later noticed that the brief cited a case that ChatGPT had made up. Had no one caught the fabricated citation, it could have changed the outcome of the proceedings.

With AI tools that can recognize objects in images, hallucinations occur when the AI generates captions that are not faithful to the provided image. Imagine asking a system to list the objects in an image that shows only a woman from the chest up talking on a phone, and receiving a response describing a woman talking on a phone while sitting on a bench. Such inaccuracies could have serious consequences in contexts where accuracy is critical.

What causes hallucinations

Engineers build AI systems by gathering massive amounts of data and feeding it into a computational system that detects patterns in the data. The system develops methods for responding to questions or performing tasks based on those patterns.

Supply an AI system with 1,000 photos of different breeds of dogs, labeled accordingly, and the system will soon learn to detect the difference between a poodle and a golden retriever. But feed it a photo of a blueberry muffin and, as machine learning researchers have shown, it may tell you that the muffin is a chihuahua.

Object recognition AIs can have trouble distinguishing between chihuahuas and blueberry muffins and between sheepdogs and mops.
Shenkman et al, CC BY
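
The pattern generalizes: a classifier trained only on known categories has no way to answer “none of the above.” As a minimal sketch of this failure mode (synthetic 2-D points standing in for image features; scikit-learn assumed), a model trained to separate two breeds still assigns a confident label to an input unlike anything it has seen:

```python
# A classifier trained on two labeled clusters will still pick one of
# its known classes, with high confidence, for an input far from its
# training data -- it cannot say "none of the above."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
poodles = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(500, 2))
retrievers = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(500, 2))
X = np.vstack([poodles, retrievers])
y = np.array([0] * 500 + [1] * 500)  # 0 = poodle, 1 = golden retriever

clf = LogisticRegression().fit(X, y)

# A "blueberry muffin": a point unlike every training example.
muffin = np.array([[40.0, -15.0]])
probs = clf.predict_proba(muffin)[0]
# The model reports near-total certainty for one class anyway.
print(f"poodle: {probs[0]:.3f}, golden retriever: {probs[1]:.3f}")
```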

When a system doesn’t understand the question or the information that it is presented with, it may hallucinate. Hallucinations often occur when the model fills in gaps based on similar contexts from its training data, or when it is built using biased or incomplete training data. This leads to incorrect guesses, as in the case of the mislabeled blueberry muffin.
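
This gap-filling can be illustrated with a toy bigram language model; the sentences below are invented for the example. The model simply chains the most frequent next word it has seen, producing fluent output with no notion of whether the resulting claim is true:

```python
# A tiny bigram "language model": it continues any prompt by chaining
# the most frequent next word from training (ties resolve by
# first-seen order), with no check on whether the claim is true.
from collections import Counter, defaultdict

corpus = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the author of hamlet is shakespeare ."
).split()

bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def complete(word, steps=3):
    out = [word]
    for _ in range(steps):
        nxt = bigrams[out[-1]]
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# "hamlet" was only ever followed by "is", and the first word seen
# after "is" was "paris" -- so the model fluently asserts a falsehood:
print(complete("hamlet"))  # -> "hamlet is paris ."
```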

It’s important to distinguish between AI hallucinations and intentionally creative AI outputs. When an AI system is asked to be creative – like when writing a story or generating artistic images – its novel outputs are expected and desired. Hallucinations, on the other hand, occur when an AI system is asked to provide factual information or perform specific tasks but instead generates incorrect or misleading content while presenting it as accurate.

The key difference lies in the context and purpose: Creativity is appropriate for artistic tasks, while hallucinations are problematic when accuracy and reliability are required.

To address these issues, companies have suggested using high-quality training data and limiting AI responses to follow certain guidelines. Nevertheless, these issues may persist in popular AI tools.


What’s at risk

The impact of an output such as calling a blueberry muffin a chihuahua may seem trivial, but consider the different kinds of technologies that use image recognition systems: An autonomous vehicle that fails to identify objects could lead to a fatal traffic accident. An autonomous military drone that misidentifies a target could put civilians’ lives in danger.

For AI tools that provide automatic speech recognition, hallucinations are AI transcriptions that include words or phrases that were never actually spoken. This is more likely to occur in noisy environments, where an AI system may end up adding new or irrelevant words in an attempt to decipher background noise such as a passing truck or a crying infant.
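
A hedged sketch of how one might probe this failure mode, assuming the open-source openai-whisper package, the soundfile library, and a hypothetical local recording clean.wav: add synthetic noise, transcribe both versions, and compare.

```python
# Hypothetical files: "clean.wav" is a real recording; noise is mixed
# in to simulate the noisy environments described above.
import numpy as np
import soundfile as sf
import whisper  # pip install openai-whisper

audio, sr = sf.read("clean.wav")
noise = np.random.default_rng(0).normal(scale=0.05, size=audio.shape)
sf.write("noisy.wav", audio + noise, sr)

model = whisper.load_model("base")
clean = model.transcribe("clean.wav")["text"]
noisy = model.transcribe("noisy.wav")["text"]

# Words present only in the noisy transcript are candidate
# hallucinations: phrases that were never actually spoken.
print("clean:", clean)
print("noisy:", noisy)
```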

As these systems become more regularly integrated into health care, social service and legal settings, hallucinations in automatic speech recognition could lead to inaccurate clinical or legal outcomes that harm patients, criminal defendants or families in need of social support.

Check AI’s work

Regardless of AI companies’ efforts to mitigate hallucinations, users should stay vigilant and question AI outputs, especially when they are used in contexts that require precision and accuracy. Double-checking AI-generated information with trusted sources, consulting experts when necessary, and recognizing the limitations of these tools are essential steps for minimizing their risks.
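
For AI-supplied scientific references in particular, one practical check is a bibliographic lookup. The sketch below queries the public CrossRef API; the citation_exists helper and the sample title are illustrative, not from the article:

```python
# Query the public CrossRef API to see whether a cited work exists.
import requests

def citation_exists(title: str) -> bool:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Require a close title match, not just any search hit.
    return any(
        title.lower() in " ".join(item.get("title", [])).lower()
        for item in items
    )

# A fabricated-sounding chatbot reference can be checked directly:
print(citation_exists("A Study of Hallucinations in Neural Networks"))
```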

Anna Choi, Ph.D. Candidate in Information Science, Cornell University and Katelyn Mei, Ph.D. Student in Information Science, University of Washington

This article is republished from The Conversation under a Creative Commons license. Read the original article.


SoftBank plans acquisition of DigitalBridge for AI expansion

SoftBank advances towards acquiring DigitalBridge to boost AI infrastructure amid soaring global data center demand

In Short:
– SoftBank may acquire DigitalBridge to enhance its AI infrastructure amid rising global data centre demand.
– The deal would give SoftBank exposure to roughly $108 billion in digital assets; financial terms have yet to be disclosed.

SoftBank Group is reportedly in advanced talks to acquire DigitalBridge Group, a move that would dramatically expand the Japanese conglomerate’s control over critical AI infrastructure as global demand for data centres accelerates. The potential deal, which could be announced within days, would give SoftBank exposure to roughly $108 billion in digital infrastructure assets, including data centres, cell towers and fibre networks. Financial terms remain undisclosed.

The acquisition fits squarely into founder Masayoshi Son’s renewed bet on artificial intelligence and computing capacity. DigitalBridge manages investments in major data centre operators such as Vantage Data Centers, Switch, DataBank and AtlasEdge, placing SoftBank at the centre of the infrastructure powering next-generation AI. SoftBank is also a key participant in Stargate, a $500 billion private-sector AI initiative announced earlier this year, and recently agreed to buy ABB’s robotics division as part of its broader push into physical AI.

Intensifying competition

Markets have reacted strongly to the prospect of the deal, with DigitalBridge shares surging as much as 47% after the initial reports emerged. The rally highlights intensifying competition for data centre assets, as AI drives unprecedented demand for computing power. McKinsey estimates AI-related infrastructure spending could reach $6.7 trillion by 2030, while Goldman Sachs forecasts global data centre power consumption will rise 175% from 2023 levels by the end of the decade. If completed, the acquisition would mark SoftBank’s return to direct ownership of a major digital infrastructure platform at a pivotal moment in the AI race.



Italy orders Meta to open WhatsApp to AI competitors

Italy orders Meta to allow rival AI chatbots on WhatsApp amid regulatory battle over market dominance

In Short:
– Italy’s antitrust authority requires Meta to allow access to rival AI chatbots on WhatsApp during an investigation.
– Meta plans to appeal the ruling, claiming it disrupts its systems and questioning whether WhatsApp was designed to serve as an AI service platform.

Italy’s antitrust authority has ordered Meta to allow competing AI chatbots access to WhatsApp, suspending rules that blocked rivals. The decision comes amid concerns that Meta’s policies could limit competition and harm consumers in the rapidly growing AI services market. Meta plans to appeal, calling the ruling “fundamentally flawed” and arguing that WhatsApp wasn’t designed to support third-party AI chatbots.

The Italian Competition Authority began investigating Meta after its March 2025 launch of Meta AI on WhatsApp, later expanding the probe to cover updated business terms that excluded rival AI providers, such as ChatGPT, Microsoft Copilot, and Perplexity. The European Commission has launched a parallel investigation, highlighting growing regulatory scrutiny on tech giants in Europe.

Europe’s stricter stance on Big Tech has sparked pushback from the industry and political figures in the U.S., including President Donald Trump. Meta maintains that its Business API restrictions still allow AI for customer support and order tracking, but says general-purpose chatbot distribution falls outside the platform’s intended use.



China’s maglev breakthrough hits 700 km/h in seconds, reshaping the future of transport

China sets world record with maglev train hitting 700 km/h in just two seconds, revolutionising ultra-high-speed transport

In Short:
– Chinese researchers set a world record, accelerating a test vehicle to 700 km/h in two seconds.
– This milestone positions China as a leader in ultra-high-speed maglev technology and future transport developments.

China has set a new world record in magnetic levitation technology after accelerating a ton-class superconducting maglev test vehicle to 700 kilometres per hour in just two seconds. The achievement, reported by state broadcaster CCTV, marks the fastest acceleration ever recorded for an electric maglev system and cements China’s position at the forefront of ultra-high-speed transport innovation.

The test was conducted by researchers at the National University of Defense Technology on a 400-metre track, where footage showed the vehicle flashing across the rail-like structure in a blur, leaving a misty trail behind it. The breakthrough follows more than a decade of research tackling complex challenges such as ultra-high-speed electromagnetic propulsion, electric suspension guidance systems, and high-field superconducting magnets, all of which are critical to stable travel at extreme speeds.
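
As a back-of-the-envelope check, assuming roughly constant acceleration (which the report does not state), the figures are self-consistent:

```python
# Reported: 0 to 700 km/h in 2 seconds on a 400-metre track.
v = 700 / 3.6        # top speed in m/s  (~194.4)
t = 2.0              # seconds to reach it
a = v / t            # ~97 m/s^2, roughly 10 g
d = 0.5 * a * t**2   # distance covered while accelerating (~194 m)

print(f"acceleration ~ {a:.1f} m/s^2 ({a / 9.81:.1f} g)")
print(f"distance to top speed ~ {d:.0f} m of the 400 m track")
```

At nearly 10 g, such acceleration is far beyond what passengers could tolerate, consistent with an unmanned test vehicle, and the roughly 194 metres needed to reach top speed fits within the 400-metre track.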

Hyperloop technology

Beyond headline-grabbing velocity, the milestone opens the door to future transport systems, including vacuum-tube maglev networks, commonly referred to as hyperloop technology. Scientists say the same advancements could also be applied to aerospace launch assistance, electromagnetic launch systems, and advanced experimental testing. According to Professor Li Jie from the National University of Defense Technology, the successful trial will significantly accelerate China’s research into frontier technologies, with future work focusing on pipeline-based high-speed transport and aerospace equipment testing.

While China now leads in superconducting maglev acceleration, global competition remains fierce. Japan still holds the record for the fastest manned train, with its L0 Series maglev reaching 603 kilometres per hour during testing in 2015. China, however, operates the world’s only commercial maglev service — the Shanghai Maglev — which currently runs at 300 kilometres per hour after its top speed was reduced from 431 kilometres per hour in 2021.

The December test builds on earlier progress made this year, including a 1.1-ton test sled that reached 650 kilometres per hour in seven seconds over a 600-metre track in June 2025. Together, these developments signal rapid momentum in China’s push toward next-generation transport systems that could redefine how people and payloads move across the planet.

