Tech

Plans to ban Texas kids from social media

A new bill in Texas would ban children in the state from using social media.

A state representative in Texas has introduced a bill proposing to ban all minors from using social media platforms.

The bill would prohibit anyone under the age of 18 in the state from using all social media, including platforms like TikTok, Instagram, Facebook, Twitter and YouTube.

HB896, introduced by Texas Rep. Jared Patterson, would require all social media users to prove their age.

Patterson wants to put measures in place to protect children from the harmful mental health effects of social media.

The legislation would ban minors from creating accounts. It would also require photo identification to verify that users are over the age of 18 before an account is approved.

The bill would also allow parents to request the removal of their child’s account, and grants the Office of the Attorney General authority to enforce violations as deceptive trade practices.

“The harms social media poses to minors are demonstrable not just in the internal research from the very social media companies that create these addictive products, but in the skyrocketing depression, anxiety, and even suicide rates we are seeing afflict children.

“We are tremendously grateful for Rep. Jared Patterson’s leadership on keeping this precious population safe, and TPPF is fully supportive of prohibiting social media access to minors to prevent the perpetual harms of social media from devastating the next generation of Texans.”

Greg Sindelar, CEO of the Texas Public Policy Foundation

Patterson described social media sites as “the pre-1964 cigarette,” with the public believing they were safe before in-depth research provided evidence of their harmful effects.

If the bill is passed, it will be the first of its kind to prohibit minors from using social media platforms. It would regulate companies such as Meta and ByteDance over minors’ use of their platforms.

Both TikTok and Instagram have minimum age policies requiring users to be at least 13 years of age.

While both platforms enforce age verification measures, users under 13 are still on the platforms. This means parents are largely left to police their children’s use.

Some platforms try to address this issue by providing a range of safety tools for parents.

If the bill passes, the level of government intervention proposed in Texas will be interesting to observe, as will how it is enforced.

It will also be fascinating to watch the societal and cultural impacts of such legislation, and whether other governments follow suit if it proves effective.

By Dr Karen Sutherland, University of the Sunshine Coast and Dharana Digital

Dr Karen Sutherland is a Senior Lecturer at the University of the Sunshine Coast, where she designs and delivers social media education and research. Dr Sutherland is also the Co-Founder and Social Media Specialist at Dharana Digital, a marketing agency focused on helping people working in the health and wellness space.

Tech

Ant Group cuts AI costs using Chinese semiconductors

Ant Group uses Chinese semiconductors to cut AI training costs by 20%, competing with US firms like Nvidia.

In Short

Jack Ma-backed Ant Group has developed cost-effective AI training techniques using Chinese semiconductors, cutting costs by 20% and producing results comparable to Nvidia. As the company pivots towards local alternatives in response to US bans, its models may significantly enhance Chinese AI development and reduce costs for services.

Jack Ma-backed Ant Group Co. has developed cost-effective techniques for training AI models using Chinese-made semiconductors, reportedly reducing costs by 20%.

The company utilised domestically produced chips from affiliates like Alibaba and Huawei, employing the Mixture of Experts machine learning method, which produced results comparable to those achieved with Nvidia’s H800 chips.

While Ant continues to use Nvidia for some AI development, it is increasingly leveraging alternatives such as Advanced Micro Devices and Chinese chips for its latest models.

This development positions Ant in competition with Chinese and US firms, especially following DeepSeek’s demonstration of cost-effective model training compared to major investments by OpenAI and Google.

The move highlights the shift of Chinese companies towards local alternatives in response to the US ban on advanced Nvidia semiconductors, including the powerful H800 model.

Ant recently published a research paper claiming that its models sometimes outperform those of Meta in specific benchmarks, a claim that Bloomberg has not independently verified. If confirmed, these models could significantly advance Chinese AI development by reducing inference costs for AI services.

As AI investment grows, Mixture of Experts models are becoming widely adopted for their efficiency: they divide work among smaller specialised sub-models (“experts”), so only part of the network runs for any given input.
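For readers curious what that routing looks like in practice, here is a minimal sketch of a Mixture of Experts layer in PyTorch. It illustrates the general technique only, not Ant Group’s implementation; the layer sizes, number of experts and top-k routing value are arbitrary choices for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfExperts(nn.Module):
    """Minimal Mixture of Experts layer: a gate routes each input to a few expert MLPs."""

    def __init__(self, d_model: int = 64, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.ReLU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(d_model, num_experts)  # scores each expert per input
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.gate(x)                               # (batch, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)  # route to top-k experts only
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each input, which is where the
        # compute savings come from in large models.
        for slot in range(self.top_k):
            for expert_id, expert in enumerate(self.experts):
                mask = indices[:, slot] == expert_id
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = MixtureOfExperts()
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```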

Tech

‘Literally just child gambling’: what kids say about Roblox, lootboxes and money in online games

Roblox is one of the world’s most popular online platforms for children, offering a variety of “experiences” including games and virtual spaces. Most of the experiences are free, but offer upgrades, bonuses and random items in exchange for cash.

What do kids make of it? In new research, we interviewed 22 children aged seven to 14 (and their parents) from November 2023 to July 2024. Some 18 of the 22 played Roblox.

In the interviews, we gave children an A$20 debit card to spend however they liked, to help us understand children’s decision-making around spending. While four children purchased non-digital items with their debit card (such as bicycle parts, toys and lollies), 12 children made purchases in Roblox.

We found children greatly value their Roblox purchases – but complain of “scary” and complex transactions, describe random reward systems as “child gambling”, and talk of “scams” and “cash grabs”, with the platform’s inflexible refund policy providing little recourse.

What is Roblox?

Created in 2006, Roblox bills itself as “the ultimate virtual universe that lets you create, share experiences with friends, and be anything you can imagine”. There are 380 million monthly active users globally.

Around 42% of Roblox players are under 13 years old. In 2024, a study found Australian players aged four to 18 spent an average 137 minutes a day on it.

Roblox has come under fire in recent years, particularly for the prevalence of grooming and child abuse on the platform. Despite parental controls, many feel that it’s still not doing enough to protect children.

Much of Roblox’s US$3.6 billion revenue in 2024 was generated via in-game microtransactions, particularly through purchases of its virtual currency Robux.

Free to play – but plenty to pay for

Researchers created a Roblox account with a listed age of 12, and could immediately purchase random reward items in the Adopt Me! game. Image: Roblox/Hardwick & Carter

It’s free to play Roblox. But Roblox and Roblox creators (people who make the platform’s “experiences”) make money via in-game purchases.

In Roblox experiences, players can purchase all sorts of things – cosmetic items to change the appearance of player avatars, functional items to use in games, and passes which give access to games or VIP experiences.

Some Roblox games also offer random reward mechanics such as lootboxes, which offer players a chance-based outcome or prize (sometimes via monetary purchases).

Lootboxes were banned for users under 15 in Australia in 2024. However, we found many of Roblox’s most popular games still have random reward mechanics for sale to accounts under 15 years of age.

In response to questions from The Conversation, a Roblox spokesperson wrote:

As a user-generated content platform, we provide our developer community with tools, information and guidelines that apply to aspects of gameplay within their games and experiences, including the recent classification update relating to paid random items. We take action on reports of developers not following guidelines or not using our tools properly to meet local compliance requirements.

Concerns about children’s digital game spending often focus on the idea that engaging with random reward mechanics might later lead to problem gambling.

While this remains the subject of ongoing research, our research shows Roblox’s spending features already harm children now. Children already feel misled or deceived.

Random rewards and ‘child gambling’

Many of Roblox’s most popular games, such as Adopt Me!, Blox Fruits and Pet Simulator 99, include random reward features. Players can purchase items of random rarity, and can often use or trade these items with other players.

One child in our study explained that playing these games is “literally just child gambling”.

Random reward mechanics are confusing for children who may not have a strong understanding of statistics or risk. This caused conflict in the families we spoke to, when children were disappointed or upset by not receiving a “good” reward.

Our research echoes earlier work identifying harms to children from monetised random reward systems.

‘Scary’ virtual currencies

Roblox also uses virtual currencies, which must be purchased using “real” currency. For instance, A$8.49 or US$5.00 will purchase 400 Robux to spend in games.

Some popular Roblox games then have their own virtual currency. Players must first convert real-world money into Robux, then convert the Robux into a game’s currency of “diamonds” or “gems”.

Some children we spoke to had sophisticated ways to handle these conversions – such as online Robux calculators or mental maths. However, other children struggled.
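To illustrate the kind of arithmetic those calculators and mental sums involve, here is a small worked example in Python. The Robux price (A$8.49 for 400 Robux) comes from the article; the gems-per-Robux rate is a hypothetical figure, since each game sets its own exchange rate.

```python
# The Robux rate below is from the article; the in-game "gems" rate is a
# hypothetical illustration, because every Roblox game sets its own rate.
AUD_PER_ROBUX = 8.49 / 400   # roughly A$0.021 per Robux
GEMS_PER_ROBUX = 10          # hypothetical exchange rate for a game currency

def gems_to_aud(gems: float) -> float:
    """Work an in-game gem price back to Australian dollars."""
    robux = gems / GEMS_PER_ROBUX
    return robux * AUD_PER_ROBUX

# A 2,000-gem item is 200 Robux, or about A$4.25 of real money.
print(f"A 2,000-gem item costs roughly A${gems_to_aud(2000):.2f}")
```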

An 11-year-old described navigating nested virtual currencies as “scary”. A 13-year-old, when asked how much they thought Robux cost in Australian dollars, said, “I can’t even begin to grasp that.”

Virtual currencies make it difficult for children to discern the true price of items they want to buy in digital games. This leads to children spending more than they realise in games – something that concerns them.

Children referred to many of these in-game spending features and outcomes as “scams”, “tricks” and “cash grabs”. Although children value their in-game purchases, and parents use in-game spending to teach values around saving and spending money responsibly, these features ultimately harm children.

Current protections are not enough

Digital games have demonstrated benefits for children’s education, social lives and identity development. Children also value the items they purchase in digital games. However, efforts to make money from games aimed at children may have significant financial and emotional impact.

Our research does not suggest monetisation features should be barred from children’s games. But our findings indicate that policy regarding children’s digital safety should try to minimise harm to children as a result of their digital spending.

In particular, we conclude that monetised random reward mechanics and virtual currencies are not appropriate in children’s games.

Other countries have struggled to regulate lootboxes effectively. Current legislation, such as the Australian classification changes introduced last September, which ban lootboxes for players under 15, is not fit for purpose. Roblox is currently rated PG on the Google Play Store and 12+ on the Apple App Store, despite many of its most popular games including paid chance-based content.

Our interviews also found that parents feel lost navigating the complexities of these games, and are extremely anxious about how their children are being monetised.

Australia’s eSafety Commissioner has argued that the way forward for children’s safety online is “safety by design”. In this approach, digital service providers must design their services with the safety of users as a top priority.

In our conversations with children, we found this is not currently the case – but could be a good starting point.

Taylor Hardwick, Postdoctoral Research Fellow in the School of Architecture, Design and Planning, University of Sydney and Marcus Carter, Professor in Human-Computer Interaction, ARC Future Fellow, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tech

What are AI hallucinations? Why AIs sometimes make things up

When someone sees something that isn’t there, people often refer to the experience as a hallucination.

Hallucinations occur when your sensory perception does not correspond to external stimuli.

Technologies that rely on artificial intelligence can have hallucinations, too.

When an algorithmic system generates information that seems plausible but is actually inaccurate or misleading, computer scientists call it an AI hallucination. Researchers have found these behaviors in different types of AI systems, from chatbots such as ChatGPT to image generators such as Dall-E to autonomous vehicles. We are information science researchers who have studied hallucinations in AI speech recognition systems.

Wherever AI systems are used in daily life, their hallucinations can pose risks. Some may be minor – when a chatbot gives the wrong answer to a simple question, the user may end up ill-informed. But in other cases, the stakes are much higher. From courtrooms where AI software is used to make sentencing decisions to health insurance companies that use algorithms to determine a patient’s eligibility for coverage, AI hallucinations can have life-altering consequences. They can even be life-threatening: Autonomous vehicles use AI to detect obstacles, other vehicles and pedestrians.

Making it up

Hallucinations and their effects depend on the type of AI system. With large language models – the underlying technology of AI chatbots – hallucinations are pieces of information that sound convincing but are incorrect, made up or irrelevant. An AI chatbot might create a reference to a scientific article that doesn’t exist or provide a historical fact that is simply wrong, yet make it sound believable.

In a 2023 court case, for example, a New York attorney submitted a legal brief that he had written with the help of ChatGPT. A discerning judge later noticed that the brief cited a case that ChatGPT had made up. Such cases could turn out differently if humans were not able to detect the hallucinated information.

With AI tools that can recognize objects in images, hallucinations occur when the AI generates captions that are not faithful to the provided image. Imagine asking a system to list objects in an image that only includes a woman from the chest up talking on a phone and receiving a response that says a woman talking on a phone while sitting on a bench. This inaccurate information could lead to different consequences in contexts where accuracy is critical.

What causes hallucinations

Engineers build AI systems by gathering massive amounts of data and feeding it into a computational system that detects patterns in the data. The system develops methods for responding to questions or performing tasks based on those patterns.

Supply an AI system with 1,000 photos of different breeds of dogs, labeled accordingly, and the system will soon learn to detect the difference between a poodle and a golden retriever. But feed it a photo of a blueberry muffin and, as machine learning researchers have shown, it may tell you that the muffin is a chihuahua.

Object recognition AIs can have trouble distinguishing between chihuahuas and blueberry muffins and between sheepdogs and mops. Image: Shenkman et al, CC BY

When a system doesn’t understand the question or the information that it is presented with, it may hallucinate. Hallucinations often occur when the model fills in gaps based on similar contexts from its training data, or when it is built using biased or incomplete training data. This leads to incorrect guesses, as in the case of the mislabeled blueberry muffin.
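As a toy illustration of why this happens, the sketch below trains a tiny nearest-neighbour classifier on made-up colour-and-texture features for two dog breeds. Shown a “blueberry muffin” feature vector it has never seen, it can only answer with one of the classes it knows. The features and labels are invented for the example; real object-recognition systems are far larger but fail for analogous reasons.

```python
import numpy as np

# Invented 2-D features (e.g. "tan colour", "curly texture") for two dog breeds.
# A real system learns millions of features from labelled photos.
train_features = np.array([
    [0.90, 0.80], [0.85, 0.75], [0.80, 0.90],   # poodle-like examples
    [0.70, 0.20], [0.75, 0.15], [0.65, 0.25],   # golden-retriever-like examples
])
train_labels = ["poodle", "poodle", "poodle",
                "golden retriever", "golden retriever", "golden retriever"]

def predict(features: np.ndarray) -> str:
    """1-nearest-neighbour: answer with the label of the closest training example."""
    distances = np.linalg.norm(train_features - features, axis=1)
    return train_labels[int(np.argmin(distances))]

# A muffin photo that happens to share surface features with the training dogs
# still gets a confident dog label: the model has no way to say
# "this is not a dog at all".
muffin = np.array([0.88, 0.78])
print(predict(muffin))  # -> "poodle"
```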

It’s important to distinguish between AI hallucinations and intentionally creative AI outputs. When an AI system is asked to be creative – like when writing a story or generating artistic images – its novel outputs are expected and desired. Hallucinations, on the other hand, occur when an AI system is asked to provide factual information or perform specific tasks but instead generates incorrect or misleading content while presenting it as accurate.

The key difference lies in the context and purpose: Creativity is appropriate for artistic tasks, while hallucinations are problematic when accuracy and reliability are required.

To address these issues, companies have suggested using high-quality training data and limiting AI responses to follow certain guidelines. Nevertheless, these issues may persist in popular AI tools.

Large language models hallucinate in several ways.

What’s at risk

The impact of an output such as calling a blueberry muffin a chihuahua may seem trivial, but consider the different kinds of technologies that use image recognition systems: An autonomous vehicle that fails to identify objects could lead to a fatal traffic accident. An autonomous military drone that misidentifies a target could put civilians’ lives in danger.

For AI tools that provide automatic speech recognition, hallucinations are AI transcriptions that include words or phrases that were never actually spoken. This is more likely to occur in noisy environments, where an AI system may end up adding new or irrelevant words in an attempt to decipher background noise such as a passing truck or a crying infant.

As these systems become more regularly integrated into health care, social service and legal settings, hallucinations in automatic speech recognition could lead to inaccurate clinical or legal outcomes that harm patients, criminal defendants or families in need of social support.

Check AI’s work

Regardless of AI companies’ efforts to mitigate hallucinations, users should stay vigilant and question AI outputs, especially when they are used in contexts that require precision and accuracy. Double-checking AI-generated information with trusted sources, consulting experts when necessary, and recognizing the limitations of these tools are essential steps for minimizing their risks.

Anna Choi, Ph.D. Candidate in Information Science, Cornell University and Katelyn Mei, Ph.D. Student in Information Science, University of Washington

This article is republished from The Conversation under a Creative Commons license. Read the original article.
