
Tech

Indian Government takes on Twitter in battle of power


As India mounts pressure on Twitter over the COVID pandemic, concerns are growing that social media platforms are becoming more powerful than governments.

Andrew Selepak, a social media professor at the University of Florida, says companies like Twitter are playing from their own rule book.

“They are applying their own rules [and] regulations to free speech regardless of local laws and regulations,” he told Ticker News Live.

India is removing critical posts about COVID from Twitter

India has asked Twitter to remove hundreds of tweets critical of its handling of the COVID pandemic.

Around half of all new daily global COVID-19 cases are coming from India. The nation’s hospitals have run out of oxygen and are operating above capacity.

“The Indian Government has been very unhappy with certain accounts being able to spread misinformation or just say anything negative about the Government,” he added.

Twitter is pushing back

Meanwhile, it’s not the first time Twitter and India have clashed. The country also ordered the removal of over 1,000 accounts in February. New Delhi claimed the tweets spread misinformation amid protests over new agriculture reforms.

Twitter initially refused to comply, but the tech giant later buckled to pressure from the IT ministry and blocked access to most of the accounts.

“[Twitter] believes there is a right for people to engage in free speech. It is one of these things where you’ve got international companies that are more powerful than any one Government,” he said.

https://twitter.com/TwitterIndia/status/1386608572377694210

Misinformation is a growing issue

It comes on the back of growing concern over fake news. Professor Selepak says reliance on social media platforms for information is becoming an issue.

“It’s how people are getting their news these days. It’s how individuals are deciding social issues to political issues,” he said.

However, Selepak says the problem is that there is little oversight when it comes to the facts.

“Where that becomes a sticky situation is the fact that the information isn’t from reputable news sources. It’s the most significant place for people to learn about their politicians [and] issues,” he said.


Tech


‘Literally just child gambling’: what kids say about Roblox, lootboxes and money in online games

Roblox is one of the world’s most popular online platforms for children, offering a variety of “experiences” including games and virtual spaces. Most of the experiences are free, but offer upgrades, bonuses and random items in exchange for cash.

What do kids make of it? In new research, we interviewed 22 children aged seven to 14 (and their parents) from November 2023 to July 2024. Some 18 of the 22 played Roblox.

In the interviews, we gave children an A$20 debit card to spend however they liked, to help us understand children’s decision-making around spending. While four children purchased non-digital items with their debit card (such as bicycle parts, toys and lollies), 12 children made purchases in Roblox.

We found children greatly value their Roblox purchases – but complain of “scary” and complex transactions, describe random reward systems as “child gambling”, and talk of “scams” and “cash grabs”, with the platform’s inflexible refund policy providing little recourse.

What is Roblox?

Created in 2006, Roblox bills itself as “the ultimate virtual universe that lets you create, share experiences with friends, and be anything you can imagine”. There are 380 million monthly active users globally.

Around 42% of Roblox players are under 13 years old. In 2024, a study found Australian players aged four to 18 spent an average of 137 minutes a day on the platform.

Roblox has come under fire in recent years, particularly for the prevalence of grooming and child abuse on the platform. Despite parental controls, many feel that it’s still not doing enough to protect children.

Much of Roblox’s US$3.6 billion revenue in 2024 was generated via in-game microtransactions, particularly through purchases of its virtual currency Robux.

Free to play – but plenty to pay for

Researchers created a Roblox account with a listed age of 12, and could immediately purchase random reward items in the Adopt Me! game via a popup reading ‘Buy Big Gift for $199 each?’
Roblox/Hardwick & Carter

It’s free to play Roblox. But Roblox and Roblox creators (people who make the platform’s “experiences”) make money via in-game purchases.

In Roblox experiences, players can purchase all sorts of things – cosmetic items to change the appearance of player avatars, functional items to use in games, and passes which give access to games or VIP experiences.

Some Roblox games also offer random reward mechanics such as lootboxes, which offer players a chance-based outcome or prize (sometimes via monetary purchases).

Lootboxes were banned for users under 15 in Australia in 2024. However, we found many of Roblox’s most popular games still have random reward mechanics for sale to accounts under 15 years of age.

In response to questions from The Conversation, a Roblox spokesperson wrote:

As a user-generated content platform, we provide our developer community with tools, information and guidelines that apply to aspects of gameplay within their games and experiences, including the recent classification update relating to paid random items. We take action on reports of developers not following guidelines or not using our tools properly to meet local compliance requirements.

Concerns about children’s digital game spending often focus on the idea that engaging with random reward mechanics might later lead to problem gambling.

While this remains the subject of ongoing research, our findings show Roblox’s spending features already harm children, who feel misled or deceived.

Random rewards and ‘child gambling’

Many of Roblox’s most popular games, such as Adopt Me!, Blox Fruits and Pet Simulator 99, include random reward features. Players can purchase items of random rarity, and can often use or trade these items with other players.

One child in our study explained that playing these games is “literally just child gambling”.

Random reward mechanics are confusing for children who may not have a strong understanding of statistics or risk. This caused conflict in the families we spoke to, when children were disappointed or upset by not receiving a “good” reward.
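To illustrate why chance-based rewards are so hard to reason about, here is a minimal Python sketch (an illustration only, not from the study; the 1% drop rate and 100-Robux price are hypothetical) that estimates how many paid draws it typically takes to receive a rare item.

```python
import random

# Hypothetical lootbox: each draw costs 100 Robux and has a 1% chance
# of containing the rare item a child actually wants (illustrative numbers).
DRAW_COST_ROBUX = 100
RARE_CHANCE = 0.01

def draws_until_rare() -> int:
    """Simulate opening lootboxes until the rare item finally drops."""
    draws = 0
    while True:
        draws += 1
        if random.random() < RARE_CHANCE:
            return draws

# Average over many simulated players to estimate the typical cost.
trials = [draws_until_rare() for _ in range(10_000)]
average_draws = sum(trials) / len(trials)
print(f"Average draws needed: {average_draws:.0f}")                   # roughly 100
print(f"Average cost: {average_draws * DRAW_COST_ROBUX:,.0f} Robux")  # roughly 10,000
```

On average a player needs about 1/0.01 = 100 draws, but the spread is wide: an unlucky player can spend far more and still receive nothing they wanted, which mirrors the disappointment described above.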

Our research echoes earlier work identifying harms to children from monetised random reward systems.

‘Scary’ virtual currencies

Roblox also uses virtual currencies, which must be purchased using “real” currency. For instance, A$8.49 or US$5.00 will purchase 400 Robux to spend in games.

Some popular Roblox games then have their own virtual currency. Players must first convert real-world money into Robux, then convert the Robux into a game’s currency of “diamonds” or “gems”.

Some children we spoke to had sophisticated ways to handle these conversions – such as online Robux calculators or mental maths. However, other children struggled.

An 11-year-old described navigating nested virtual currencies as “scary”. A 13-year-old, when asked how much they thought Robux cost in Australian dollars, said, “I can’t even begin to grasp that.”

Virtual currencies make it difficult for children to discern the true price of items they want to buy in digital games. This leads to children spending more than they realise in games – something that concerns them.
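As a rough sketch of the arithmetic involved (the A$8.49-for-400-Robux rate comes from the pricing above; the Robux-to-gems rate is a hypothetical example), converting a nested in-game price back into real money takes several steps:

```python
# Bundle price cited above: A$8.49 buys 400 Robux.
AUD_PER_ROBUX = 8.49 / 400            # about A$0.021 per Robux

# Hypothetical second conversion inside a game: 100 Robux buys 500 gems.
ROBUX_PER_GEM = 100 / 500             # 0.2 Robux per gem

def gems_to_aud(gem_price: float) -> float:
    """Convert an in-game gem price back into approximate Australian dollars."""
    robux = gem_price * ROBUX_PER_GEM
    return robux * AUD_PER_ROBUX

# A "2,000 gem" item sounds abstract; in real money it is roughly:
print(f"2,000 gems ≈ A${gems_to_aud(2000):.2f}")   # ≈ A$8.49
```

Working backwards through two exchange rates like this is the step some children handled with online Robux calculators or mental maths, while others found it “scary”.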

Children referred to many of these in-game spending features and outcomes as “scams”, “tricks” and “cash grabs”. Although children value their in-game purchases, and parents use in-game spending to teach values around saving and spending money responsibly, these features ultimately harm children.

Current protections are not enough

Digital games have demonstrated benefits for children’s education, social lives and identity development. Children also value the items they purchase in digital games. However, efforts to make money from games aimed at children may have significant financial and emotional impacts.

Our research does not suggest monetisation features should be barred from children’s games. But our findings indicate that policy regarding children’s digital safety should try to minimise harm to children as a result of their digital spending.

In particular, we conclude that monetised random reward mechanics and virtual currencies are not appropriate in children’s games.

Other countries have struggled to regulate lootboxes effectively. Current legislation, such as the Australian classification changes introduced last September, which ban lootboxes for players under 15, is not fit for purpose. Roblox is currently rated PG on Google Play store and 12+ on the Apple App Store, despite many of its most popular games including paid chance-based content.

Our interviews also found that parents feel lost navigating the complexities of these games, and are extremely anxious about how their children are being monetised.

Australia’s eSafety Commissioner has argued that the way forward for children’s safety online is “safety by design”. In this approach, digital service providers must design their services with the safety of users as a top priority.

In our conversations with children, we found this is not currently the case – but it could be a good starting point.

Taylor Hardwick, Postdoctoral Research Fellow in the School of Architecture, Design and Planning, University of Sydney and Marcus Carter, Professor in Human-Computer Interaction, ARC Future Fellow, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Tech

What are AI hallucinations? Why AIs sometimes make things up


When someone sees something that isn’t there, people often refer to the experience as a hallucination.

Hallucinations occur when your sensory perception does not correspond to external stimuli.

Technologies that rely on artificial intelligence can have hallucinations, too.

When an algorithmic system generates information that seems plausible but is actually inaccurate or misleading, computer scientists call it an AI hallucination. Researchers have found these behaviors in different types of AI systems, from chatbots such as ChatGPT to image generators such as Dall-E to autonomous vehicles. We are information science researchers who have studied hallucinations in AI speech recognition systems.

Wherever AI systems are used in daily life, their hallucinations can pose risks. Some may be minor – when a chatbot gives the wrong answer to a simple question, the user may end up ill-informed. But in other cases, the stakes are much higher. From courtrooms where AI software is used to make sentencing decisions to health insurance companies that use algorithms to determine a patient’s eligibility for coverage, AI hallucinations can have life-altering consequences. They can even be life-threatening: Autonomous vehicles use AI to detect obstacles, other vehicles and pedestrians.

Making it up

Hallucinations and their effects depend on the type of AI system. With large language models – the underlying technology of AI chatbots – hallucinations are pieces of information that sound convincing but are incorrect, made up or irrelevant. An AI chatbot might create a reference to a scientific article that doesn’t exist or provide a historical fact that is simply wrong, yet make it sound believable.

In a 2023 court case, for example, a New York attorney submitted a legal brief that he had written with the help of ChatGPT. A discerning judge later noticed that the brief cited a case that ChatGPT had made up. If such hallucinated information goes undetected, it can change the outcome of legal proceedings.

With AI tools that can recognize objects in images, hallucinations occur when the AI generates captions that are not faithful to the provided image. Imagine asking a system to list the objects in an image that shows only a woman from the chest up talking on a phone, and receiving a response describing a woman talking on a phone while sitting on a bench. Such inaccurate information can have serious consequences in contexts where accuracy is critical.

What causes hallucinations

Engineers build AI systems by gathering massive amounts of data and feeding it into a computational system that detects patterns in the data. The system develops methods for responding to questions or performing tasks based on those patterns.

Supply an AI system with 1,000 photos of different breeds of dogs, labeled accordingly, and the system will soon learn to detect the difference between a poodle and a golden retriever. But feed it a photo of a blueberry muffin and, as machine learning researchers have shown, it may tell you that the muffin is a chihuahua.

Object recognition AIs can have trouble distinguishing between chihuahuas and blueberry muffins and between sheepdogs and mops.
Shenkman et al, CC BY

When a system doesn’t understand the question or the information that it is presented with, it may hallucinate. Hallucinations often occur when the model fills in gaps based on similar contexts from its training data, or when it is built using biased or incomplete training data. This leads to incorrect guesses, as in the case of the mislabeled blueberry muffin.
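A toy illustration of that gap-filling (a made-up sketch, not the researchers’ actual systems): a nearest-neighbour “classifier” that knows only dog breeds must still answer with a dog breed when shown something that is not a dog at all.

```python
# Toy "classifier" that knows only dog breeds, using fabricated
# two-dimensional features (say, roundness and speckledness) for illustration.
training_data = {
    "chihuahua":        (0.9, 0.8),
    "poodle":           (0.3, 0.2),
    "golden retriever": (0.5, 0.4),
}

def classify(features):
    """Return the label of the nearest training example."""
    def squared_distance(label):
        fx, fy = training_data[label]
        return (features[0] - fx) ** 2 + (features[1] - fy) ** 2
    return min(training_data, key=squared_distance)

# A blueberry muffin is round and speckled, so its (made-up) features land
# closest to "chihuahua". With no "not a dog" option, the model fills the
# gap with the nearest pattern it knows and answers confidently anyway.
muffin_features = (0.85, 0.75)
print(classify(muffin_features))   # -> chihuahua
```

Real image classifiers use far richer features, but the failure mode is the same: an input outside the training data still gets mapped to whichever known pattern it most resembles.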

It’s important to distinguish between AI hallucinations and intentionally creative AI outputs. When an AI system is asked to be creative – like when writing a story or generating artistic images – its novel outputs are expected and desired. Hallucinations, on the other hand, occur when an AI system is asked to provide factual information or perform specific tasks but instead generates incorrect or misleading content while presenting it as accurate.

The key difference lies in the context and purpose: Creativity is appropriate for artistic tasks, while hallucinations are problematic when accuracy and reliability are required.

To address these issues, companies have suggested using high-quality training data and limiting AI responses to follow certain guidelines. Nevertheless, these issues may persist in popular AI tools.


What’s at risk

The impact of an output such as calling a blueberry muffin a chihuahua may seem trivial, but consider the different kinds of technologies that use image recognition systems: An autonomous vehicle that fails to identify objects could lead to a fatal traffic accident. An autonomous military drone that misidentifies a target could put civilians’ lives in danger.

For AI tools that provide automatic speech recognition, hallucinations are AI transcriptions that include words or phrases that were never actually spoken. This is more likely to occur in noisy environments, where an AI system may end up adding new or irrelevant words in an attempt to decipher background noise such as a passing truck or a crying infant.

As these systems become more regularly integrated into health care, social service and legal settings, hallucinations in automatic speech recognition could lead to inaccurate clinical or legal outcomes that harm patients, criminal defendants or families in need of social support.

Check AI’s work

Regardless of AI companies’ efforts to mitigate hallucinations, users should stay vigilant and question AI outputs, especially when they are used in contexts that require precision and accuracy. Double-checking AI-generated information with trusted sources, consulting experts when necessary, and recognizing the limitations of these tools are essential steps for minimizing their risks.

Anna Choi, Ph.D. Candidate in Information Science, Cornell University and Katelyn Mei, Ph.D. Student in Information Science, University of Washington

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Tech

Apple shakes up AI leadership to enhance Siri performance

Apple reshuffles AI executives, appointing Mike Rockwell to lead Siri, amid ongoing struggles and delays in AI product development.


In Short

Apple is reorganising its AI leadership, with Mike Rockwell taking charge of Siri following CEO Tim Cook’s concerns about John Giannandrea. This shift aims to enhance Siri’s development amidst delays and competition in the AI market.

Apple Inc. is reorganising its artificial intelligence leadership to enhance the development of Siri following recent delays.

CEO Tim Cook has expressed a lack of confidence in AI head John Giannandrea, prompting the appointment of Mike Rockwell, the creator of the Vision Pro headset, to oversee Siri.

Rockwell will report to software chief Craig Federighi, effectively removing Siri from Giannandrea’s jurisdiction. Apple plans to announce these changes to employees this week.

Lagging competitors

The company faces pressure as its AI technology continues to lag behind competitors. Despite promoting AI features for the iPhone 16, these features have been delayed, leading to dissatisfaction amongst staff.

Rockwell, an experienced executive known for product innovation, will replace Giannandrea as head of Siri, though Giannandrea is not leaving the company. He will continue managing AI research, testing, and related technologies.

Over the years, Siri has been led by several executives, and Giannandrea took over in 2018. Apple’s internal discourse increasingly aligns AI initiatives with hardware, which may support deeper integration of AI in future products.

The upcoming AI changes were anticipated, as Apple has been adjusting personnel to strengthen the Siri team. Rockwell’s background may enable the improved personalisation of Siri that has been proposed for years.
