
Tech

The next AI frontier – the deepfakes are calling


Voice deepfakes are calling – here’s what they are and how to avoid getting scammed

You have just returned home after a long day at work and are about to sit down for dinner when suddenly your phone starts buzzing. On the other end is a loved one, perhaps a parent, a child or a childhood friend, begging you to send them money immediately.

You ask them questions, attempting to understand. There is something off about their answers, which are either vague or out of character, and sometimes there is a peculiar delay, almost as though they were thinking a little too slowly. Yet you are certain it is your loved one speaking: that is their voice you hear, and the caller ID is showing their number. Chalking up the strangeness to their panic, you dutifully send the money to the bank account they provide you.

The next day, you call them back to make sure everything is all right. Your loved one has no idea what you are talking about. That is because they never called you – you have been tricked by technology: a voice deepfake. Thousands of people were scammed this way in 2022.

As computer security researchers, we see that ongoing advancements in deep-learning algorithms, audio editing and engineering, and synthetic voice generation have meant that it is increasingly possible to convincingly simulate a person’s voice.

Even worse, chatbots like ChatGPT are starting to generate realistic scripts with adaptive real-time responses. By combining these technologies with voice generation, a deepfake goes from being a static recording to a live, lifelike avatar that can convincingly have a phone conversation.

Crafting a compelling high-quality deepfake, whether video or audio, is not the easiest thing to do. It requires a wealth of artistic and technical skills, powerful hardware and a fairly hefty sample of the target voice.

A growing number of services offer to produce moderate- to high-quality voice clones for a fee, and some voice deepfake tools need a sample only a minute long, or even just a few seconds, to produce a clone that could be convincing enough to fool someone. To convince a loved one, however – for example, to use in an impersonation scam – it would likely take a significantly larger sample.

With all that said, we at the DeFake Project of the Rochester Institute of Technology, the University of Mississippi and Michigan State University, and other researchers are working hard to be able to detect video and audio deepfakes and limit the harm they cause. There are also straightforward and everyday actions that you can take to protect yourself.

For starters, voice phishing, or “vishing,” scams like the one described above are the most likely voice deepfakes you might encounter in everyday life, both at work and at home. In 2019, an energy firm was scammed out of US$243,000 when criminals simulated the voice of its parent company’s boss to order an employee to transfer funds to a supplier. In 2022, people were swindled out of an estimated $11 million by simulated voices, including those of close personal connections.

What can you do?

Be mindful of unexpected calls, even from people you know well. This is not to say you need to schedule every call, but it helps to at least email or text ahead. Also, do not rely on caller ID, since that can be faked, too. For example, if you receive a call from someone claiming to represent your bank, hang up and call the bank directly to confirm the call’s legitimacy. Be sure to use a number you have written down, saved in your contacts list or found yourself online.

Additionally, be careful with your personal identifying information, like your Social Security number, home address, birth date, phone number, middle name and even the names of your children and pets. Scammers can use this information to impersonate you to banks, realtors and others, enriching themselves while bankrupting you or destroying your credit.

Here is another piece of advice: know yourself. Specifically, know your intellectual and emotional biases and vulnerabilities. This is good life advice in general, but it is key to protect yourself from being manipulated. Scammers typically seek to suss out and then prey on your financial anxieties, your political attachments or other inclinations, whatever those may be.

This alertness is also a decent defense against disinformation using voice deepfakes. Deepfakes can be used to take advantage of your confirmation bias, or what you are inclined to believe about someone.

If you hear an important person, whether from your community or the government, saying something that either seems very uncharacteristic for them or confirms your worst suspicions of them, you would be wise to be wary.


Tech

OpenAI Unveils ChatGPT Atlas: The Future of Browsing?



OpenAI has taken another giant leap forward with the launch of ChatGPT Atlas — an AI-powered web browser that could redefine how people search, explore, and interact online. Investors and competitors are watching closely as this new technology challenges the dominance of traditional browsers like Google Chrome.

With ChatGPT Atlas, users may soon experience a web that feels less like typing into a search box and more like conversing with an intelligent assistant. The integration of AI could make browsing faster, more intuitive, and more personalised than ever before — but it also raises serious questions about privacy and data use.

As AI becomes more deeply embedded in the digital world, ChatGPT Atlas could represent the next major step toward a fully AI-driven online experience. What does this mean for users — and for the tech giants trying to keep up?


Tech

OpenAI limits deepfakes after Bryan Cranston’s concerns

OpenAI protects against deepfakes on Sora 2 after Bryan Cranston and SAG-AFTRA raise concerns over unauthorized AI-generated content



In Short:
– OpenAI partners with Bryan Cranston and unions to combat deepfakes on its Sora app.
– The app now includes options for people to control their likenesses and voices.
OpenAI announced it will work with Bryan Cranston, SAG-AFTRA, and actor unions to combat deepfakes on its AI video app, Sora. Cranston voiced concerns after unauthorized AI-generated clips featuring his likeness emerged following Sora 2’s launch in late September. He expressed gratitude to OpenAI for taking steps to safeguard actors’ rights to control their likenesses.


The partnership aims to enhance protections against unauthorized AI content. The Creative Artists Agency and United Talent Agency had previously criticized OpenAI, citing risks to their clients’ intellectual property.

Last week, OpenAI blocked disrespectful videos of Martin Luther King Jr. at the request of his estate, following similar pressures. Zelda Williams also requested the public refrain from sending her AI-generated clips of her late father, Robin Williams.

Policy Changes

Following tensions post-launch, CEO Sam Altman revised Sora’s policy to give rights holders greater control of their likenesses.

The app now allows individuals to opt out, reflecting OpenAI’s commitment to respond quickly to concerns from performers.

OpenAI backs the NO FAKES Act, supporting legislation that aims to protect individuals from unauthorized AI-generated representations.

OpenAI is focused on ensuring performers’ rights are respected regarding the misuse of their voices and likenesses. Altman reiterated the company’s dedication to these protections.




Tech

Major apps down as AWS experiences global outage

AWS outage disrupts Fortnite, Snapchat and multiple services globally



In Short:
– An AWS outage on Monday disrupted major apps including Fortnite and Snapchat and affected companies worldwide.
– UK companies including Lloyds Bank and Vodafone reported issues due to the AWS outage.

Amazon’s AWS experienced a significant outage on Monday, impacting major apps including Fortnite and Snapchat. The disruption affected connectivity for numerous companies globally. AWS reported increased error rates and latencies across multiple services and said it was working to recover quickly.

Banner

The outage is the first major internet disruption since an incident last year that affected essential technology systems worldwide. AWS provides on-demand computing and storage services and is vital infrastructure for many websites and platforms.

Multiple companies reported disruptions, including AI startup Perplexity, cryptocurrency exchange Coinbase, and trading app Robinhood. Perplexity’s CEO confirmed on X that the outages were linked to AWS issues.

Amazon’s shopping site, Prime Video, and Alexa services also faced difficulties, according to Downdetector. Other affected platforms included popular gaming applications like Clash Royale and financial services such as Venmo and Chime.

Uber competitor Lyft’s app was reported down for numerous users in the U.S. Messaging platform Signal also acknowledged connection problems stemming from the AWS outage.

British Companies

In the UK, Lloyds Bank, Bank of Scotland, and telecom provider Vodafone were notably affected. HMRC’s website also encountered issues during the outage.

Elon Musk stated that his platform, X, remained operational despite the widespread disruptions.



