
The next AI frontier – the deepfakes are calling

Voice deepfakes are calling – here’s what they are and how to avoid getting scammed

You have just returned home after a long day at work and are about to sit down for dinner when suddenly your phone starts buzzing. On the other end is a loved one, perhaps a parent, a child or a childhood friend, begging you to send them money immediately.

You ask them questions, attempting to understand. There is something off about their answers, which are either vague or out of character, and sometimes there is a peculiar delay, almost as though they were thinking a little too slowly. Yet you are certain it is your loved one speaking: that is their voice you hear, and the caller ID shows their number. Chalking up the strangeness to their panic, you dutifully send the money to the bank account they provide.

The next day, you call them back to make sure everything is all right. Your loved one has no idea what you are talking about. That is because they never called you – you have been tricked by technology: a voice deepfake. Thousands of people were scammed this way in 2022.

As computer security researchers, we see that ongoing advances in deep-learning algorithms, audio editing and engineering, and synthetic voice generation have made it increasingly possible to convincingly simulate a person’s voice.

Even worse, chatbots like ChatGPT are starting to generate realistic scripts with adaptive real-time responses. By combining these technologies with voice generation, a deepfake goes from being a static recording to a live, lifelike avatar that can convincingly have a phone conversation.

Crafting a compelling high-quality deepfake, whether video or audio, is not the easiest thing to do. It requires a wealth of artistic and technical skills, powerful hardware and a fairly hefty sample of the target voice.

A growing number of services offer to produce moderate- to high-quality voice clones for a fee, and some voice deepfake tools need a sample only a minute long, or even just a few seconds, to produce a clone that could be convincing enough to fool someone. To convince a loved one, however – for example, in an impersonation scam – it would likely take a significantly larger sample.

With all that said, we at the DeFake Project of the Rochester Institute of Technology, the University of Mississippi and Michigan State University, and other researchers are working hard to detect video and audio deepfakes and limit the harm they cause. There are also straightforward, everyday actions you can take to protect yourself.

For starters, voice phishing, or “vishing,” scams like the one described above are the most likely voice deepfakes you might encounter in everyday life, both at work and at home. In 2019, an energy firm was scammed out of US$243,000 when criminals simulated the voice of its parent company’s boss to order an employee to transfer funds to a supplier. In 2022, people were swindled out of an estimated $11 million by simulated voices, including of close, personal connections.

What can you do?

Be mindful of unexpected calls, even from people you know well. This is not to say you need to schedule every call, but it helps to at least email or text ahead. Also, do not rely on caller ID, since that can be faked, too. For example, if you receive a call from someone claiming to represent your bank, hang up and call the bank directly to confirm the call’s legitimacy. Be sure to use a number you have written down, saved in your contacts list or found on Google.

Additionally, be careful with your personal identifying information, like your Social Security number, home address, birth date, phone number, middle name and even the names of your children and pets. Scammers can use this information to impersonate you to banks, realtors and others, enriching themselves while bankrupting you or destroying your credit.

Here is another piece of advice: know yourself. Specifically, know your intellectual and emotional biases and vulnerabilities. This is good life advice in general, but it is key to protect yourself from being manipulated. Scammers typically seek to suss out and then prey on your financial anxieties, your political attachments or other inclinations, whatever those may be.

This alertness is also a decent defense against disinformation using voice deepfakes. Deepfakes can be used to take advantage of your confirmation bias, or what you are inclined to believe about someone.

If you hear an important person, whether from your community or the government, saying something that either seems very uncharacteristic for them or confirms your worst suspicions of them, you would be wise to be wary.


Microsoft smashes earnings as AI fuels Azure

Microsoft’s AI integration propels Azure’s 33% growth, exceeding earnings expectations and driving impressive market momentum.

Microsoft is rewriting the tech playbook, embedding cutting-edge artificial intelligence into every corner of its operations.

This quarter, Azure surged 33%, smashing forecasts and fuelling massive growth across its cloud and enterprise divisions. Investors cheered as Microsoft’s earnings blew past expectations, backed by soaring global demand and a bulletproof business model.



Apple’s AI tool to enhance iPhone battery life

Apple plans AI tool in iOS 19 to enhance iPhone battery life by analysing usage and optimising energy consumption.


In Short:
Apple Inc. plans to launch an AI battery management feature in iOS 19 in September, aimed at improving iPhone battery life by analysing user behaviour. The update will help offset the iPhone 17’s smaller battery and includes significant user interface changes and improved device synchronisation.

Apple Inc. plans to introduce an AI-based battery management feature in iOS 19, expected in September.

This update aims to enhance iPhone battery life by analysing user behaviour and optimising energy usage.

The technology will leverage collected battery data to make predictions about power consumption, along with a lock-screen feature indicating charging times.

Details come from sources familiar with the project, though Apple has not publicly confirmed these plans.

This initiative is part of Apple’s broader strategy to integrate AI across its services. Last year’s Apple Intelligence launch introduced features for text editing and notification management.

The new AI battery feature is particularly relevant for the upcoming iPhone 17, which will have a smaller battery due to its slim design. The optimisation aims to offset the reduced battery capacity.

Despite its potential, the Apple Intelligence platform has encountered delays, with some features still unavailable. The company faces competition from tech leaders in AI development.

iOS 19 will also include significant user interface changes, referred to as Solarium, along with improved synchronisation across devices.

Apple is committed to ensuring this year’s software releases are stable, addressing past issues with bugs and functionality.

Development of the new operating systems is set to be completed by the end of May, with a developer preview at the Worldwide Developers Conference on June 9. A public release typically follows in September, coinciding with new product launches.

Apple recently released iOS 18.5, focusing on bug fixes and feature enhancements.


Meta’s new AI chatbot is yet another tool for harvesting data

Meta’s new AI chatbot is yet another tool for harvesting data to potentially sell you stuff


Uri Gal, University of Sydney

Last week, Meta – the parent company of Facebook, Instagram, Threads and WhatsApp – unveiled a new “personal artificial intelligence (AI)”.

Powered by the Llama 4 language model, Meta AI is designed to assist, chat and engage in natural conversation. With its polished interface and fluid interactions, Meta AI might seem like just another entrant in the race to build smarter digital assistants.

But beneath its inviting exterior lies a crucial distinction that transforms the chatbot into a sophisticated data harvesting tool.

‘Built to get to know you’

“Meta AI is built to get to know you”, the company declared in its news announcement. Contrary to the friendly promise implied by the slogan, the reality is less reassuring.

The Washington Post columnist Geoffrey A. Fowler found that by default, Meta AI “kept a copy of everything”, and it took some effort to delete the app’s memory. Meta responded that the app provides “transparency and control” throughout and is no different from its other apps.

However, while competitors like Anthropic’s Claude operate on a subscription model that reflects a more careful approach to user privacy, Meta’s business model is firmly rooted in what it has always done best: collecting and monetising your personal data.

This distinction creates a troubling paradox. Chatbots are rapidly becoming digital confidants with whom we share professional challenges, health concerns and emotional struggles.

Recent research shows we are as likely to share intimate information with a chatbot as we are with fellow humans. The personal nature of these interactions makes them a gold mine for a company whose revenue depends on knowing everything about you.

Consider this potential scenario: a recent university graduate confides in Meta AI about their struggle with anxiety during job interviews. Within days, their Instagram feed fills with advertisements for anxiety medications and self-help books – despite them having never publicly posted about these concerns.

The cross-platform integration of Meta’s ecosystem of apps means your private conversations can seamlessly flow into their advertising machine to create user profiles with unprecedented detail and accuracy.

This is not science fiction. Meta’s extensive history of data privacy scandals – from Cambridge Analytica to the revelation that Facebook tracks users across the internet without their knowledge – demonstrates the company’s consistent prioritisation of data collection over user privacy.

What makes Meta AI particularly concerning is the depth and nature of what users might reveal in conversation compared to what they post publicly.

Open to manipulation

Rather than being just a passive collector of information, a chatbot like Meta AI can become an active participant in manipulation. The implications extend beyond just seeing more relevant ads.

Imagine mentioning to the chatbot that you are feeling tired today, only to have it respond with: “Have you tried Brand X energy drinks? I’ve heard they’re particularly effective for afternoon fatigue.” This seemingly helpful suggestion could actually be a product placement, delivered without any indication that it’s sponsored content.

Such subtle nudges represent a new frontier in advertising that blurs the line between a helpful AI assistant and a corporate salesperson.

Unlike overt ads, recommendations mentioned in conversation carry the weight of trusted advice. And that advice would come from what many users will increasingly view as a digital “friend”.

A history of not prioritising safety

Meta has demonstrated a willingness to prioritise growth over safety when releasing new technology features. Recent reports reveal internal concerns at Meta, where staff members warned that the company’s rush to popularise its chatbot had “crossed ethical lines” by allowing Meta AI to engage in explicit romantic role-play, even with test users who claimed to be underage.

Such decisions reveal a reckless corporate culture, seemingly still driven by the original motto of moving fast and breaking things.

Now, imagine those same values applied to an AI that knows your deepest insecurities, health concerns and personal challenges – all while having the ability to subtly influence your decisions through conversational manipulation.

The potential for harm extends beyond individual consumers. While there is no evidence that Meta AI is being used for manipulation, it clearly has that capacity.

For example, the chatbot could become a tool for pushing political content or shaping public discourse through the algorithmic amplification of certain viewpoints. Meta has played a role in propagating misinformation in the past, and recently made the decision to discontinue fact-checking across its platforms.

The risk of chatbot-driven manipulation is also increased now that AI safety regulations are being scaled back in the United States.

Lack of privacy is a choice

AI assistants are not inherently harmful. Other companies protect user privacy by choosing to generate revenue primarily through subscriptions rather than data harvesting. Responsible AI can and does exist without compromising user welfare for corporate profit.

As AI becomes increasingly integrated into our daily lives, the choices companies make about business models and data practices will have profound implications.

Meta’s decision to offer a free AI chatbot while reportedly lowering safety guardrails sets a low ethical standard. By embracing its advertising-based business model for something as intimate as an AI companion, Meta has created not just a product, but a surveillance system that can extract unprecedented levels of personal information.

Before inviting Meta AI to become your digital confidant, consider the true cost of this “free” service. In an era where data has become the most valuable commodity, the price you pay might be far higher than you realise.

As the old adage goes, if you’re not paying for the product, you are the product – and Meta’s new chatbot might be the most sophisticated product harvester yet created.

When Meta AI says it is “built to get to know you”, we should take it at its word and proceed with appropriate caution.

Uri Gal, Professor in Business Information Systems, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.
