
Tech

Apple urged to abandon child safety features

The tech giant is defending its new features, aimed at preventing the spread of child sexual abuse material, despite mounting pressure from privacy advocates.

Apple plans to scan iCloud photos for child sexual abuse images, and says its “method of detecting known CSAM (child sexual abuse material) is designed with user privacy in mind”.

The company has also announced a parental control option, which warns children and their parents when they are about to view or send sexually explicit photos in the Messages app.

But privacy groups claim the new features will “create new risks for children”.

Concerns have also been raised that the scanning software “could be used to censor speech and threaten the privacy and security of people around the world”.

A coalition of more than 90 rights groups has now written to Apple CEO Tim Cook, outlining their concerns, and urging the tech titan to abandon its plans to introduce the new features.

The signatories include civil rights, human rights and digital rights groups.


“Though these capabilities are intended to protect children and to reduce the spread of child sexual abuse material (CSAM), we are concerned that they will be used to censor protected speech, threaten the privacy and security of people around the world, and have disastrous consequences for many children.”

Letter sent to Apple CEO Tim Cook

The coalition of rights groups has raised concerns that the scan and alert feature in Messages “could result in alerts that threaten the safety and wellbeing of some young people”.

The groups say LGBTQ+ youths with unsympathetic parents are particularly at risk.

They also claim that once the “CSAM hash scanning for photos is built into Apple products, the company will face enormous pressure, and possibly legal requirements, from governments around the world to scan for all sorts of images that the governments find objectionable”.

Apple defends its child safety features

Apple has sought to allay concerns, pushing back against claims that the technology will be used for other purposes.

The trillion-dollar company insists it won’t give in to pressure from any government to use the technology for other surveillance purposes.

Apple says it “will refuse any such demands”.

“Let us be clear, this technology is limited to detecting CSAM (child sexual abuse material) stored in iCloud, and we will not accede to any government’s request to expand it.”

“We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future,” Apple said in a recent FAQ.


Tech

Anthropic CEO holds key Pentagon talks on AI ethics and military use

Anthropic CEO to meet Defense Secretary Hegseth on ethical AI deployment and DOD contract discussions.


Anthropic’s CEO is scheduled to meet Defense Secretary Pete Hegseth at the Pentagon to discuss the use of the startup’s artificial intelligence models in military applications. The meeting comes as the Department of Defense seeks clarity on how Anthropic’s AI can be integrated into its operations.

Negotiations between Anthropic and the DOD have recently faced challenges over terms of use. Anthropic is pushing for safeguards to ensure its models are not used for autonomous weapons or domestic surveillance, while the DOD wants full flexibility to deploy the technology for all lawful purposes.

Currently, Anthropic is the only AI company deployed on the DOD’s classified networks, holding a $200 million contract. This meeting could be pivotal in resolving tensions and strengthening collaboration between the AI startup and the U.S. government.

Subscribe to never miss an episode of Ticker – https://www.youtube.com/@weareticker


Download the Ticker app


Tech

Apple’s next AI wearables could change how we use tech

Apple is launching smart glasses, an AI pendant, and camera-equipped AirPods with upgraded Siri by 2027.


Apple is accelerating its wearable tech game, developing three cutting-edge devices featuring an upgraded Siri powered by Google’s Gemini AI models. The tech giant is betting big on AI to enhance user interaction across smart glasses, AirPods, and a unique AI pendant.

The N50 smart glasses will come equipped with dual cameras and are slated for a 2027 release, with prototypes already in the hands of Apple’s hardware engineers. Production is expected to ramp up by December 2026, signaling Apple’s commitment to merging AI with everyday accessories.

Meanwhile, Apple is also working on a camera-equipped AirPods model and an AI pendant that can be worn as a necklace or clipped to clothing, featuring cameras, microphones, and a speaker. These innovations highlight a new era of wearable technology powered by advanced AI.


Tech

Sam Altman predicts superintelligence could appear by 2028

Sam Altman warns superintelligence may arise by 2028, advocating for global cooperation and a new governing body for AI.


OpenAI CEO Sam Altman has issued a bold prediction, suggesting that early forms of superintelligence could emerge as soon as 2028. Speaking at the India AI Impact Summit, Altman emphasised the urgent need for global cooperation to manage AI development responsibly.

He proposed the creation of an international oversight body for AI, similar to the International Atomic Energy Agency, to prevent misuse and ensure ethical advancements. Altman also warned against societies accepting authoritarian control in exchange for technological gains, highlighting the geopolitical stakes of AI.

With over 100 million users in India alone, ChatGPT has become a key part of the AI landscape. Altman acknowledged potential job disruptions but expressed optimism about society’s ability to adapt to rapid AI changes.

