

The next AI frontier – the deepfakes are calling


Voice deepfakes are calling – here’s what they are and how to avoid getting scammed

You have just returned home after a long day at work and are about to sit down for dinner when suddenly your phone starts buzzing. On the other end is a loved one, perhaps a parent, a child or a childhood friend, begging you to send them money immediately.

You ask them questions, attempting to understand. There is something off about their answers, which are either vague or out of character, and sometimes there is a peculiar delay, almost as though they were thinking a little too slowly. Yet you are certain it is your loved one speaking: that is their voice you hear, and the caller ID shows their number. Chalking up the strangeness to their panic, you dutifully send the money to the bank account they provide.

The next day, you call them back to make sure everything is all right. Your loved one has no idea what you are talking about. That is because they never called you – you have been tricked by technology: a voice deepfake. Thousands of people were scammed this way in 2022.

As computer security researchers, we see that ongoing advancements in deep-learning algorithms, audio editing and engineering, and synthetic voice generation have meant that it is increasingly possible to convincingly simulate a person’s voice.

Even worse, chatbots like ChatGPT are starting to generate realistic scripts with adaptive real-time responses. By combining these technologies with voice generation, a deepfake goes from being a static recording to a live, lifelike avatar that can convincingly have a phone conversation.

Crafting a compelling high-quality deepfake, whether video or audio, is not the easiest thing to do. It requires a wealth of artistic and technical skills, powerful hardware and a fairly hefty sample of the target voice.

There are a growing number of services offering to produce moderate- to high-quality voice clones for a fee, and some voice deepfake tools need a sample only a minute long, or even just a few seconds, to produce a voice clone that could be convincing enough to fool someone. However, to convince a loved one – for example, to use in an impersonation scam – it would likely take a significantly larger sample.

With all that said, we at the DeFake Project of the Rochester Institute of Technology, the University of Mississippi and Michigan State University, and other researchers are working hard to detect video and audio deepfakes and limit the harm they cause. There are also straightforward and everyday actions that you can take to protect yourself.
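Before turning to those everyday protections, it may help to see, in rough outline, what automated detection can look like. The sketch below is purely illustrative and is not the DeFake Project's method, which is far more sophisticated; the audio file names are hypothetical. It summarises each clip with spectral features (MFCCs) and fits a simple classifier to separate genuine recordings from synthetic voices.

```python
# Illustrative only: a toy audio deepfake detector built on spectral features.
# The file names are hypothetical; real detection systems are far more advanced.
import numpy as np
import librosa                                   # audio loading and feature extraction
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def clip_features(path: str) -> np.ndarray:
    """Summarise an audio clip as the mean of its MFCC coefficients."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)                     # one fixed-length vector per clip

# Hypothetical labelled data: 0 = genuine recording, 1 = synthetic voice.
clips = [("real_01.wav", 0), ("real_02.wav", 0), ("real_03.wav", 0),
         ("fake_01.wav", 1), ("fake_02.wav", 1), ("fake_03.wav", 1)]

X = np.stack([clip_features(path) for path, _ in clips])
y = np.array([label for _, label in clips])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

In practice, detectors of this general kind are trained on very large collections of real and synthesised speech, and scammers constantly adapt to evade them, which is why the everyday precautions below still matter.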

For starters, voice phishing, or “vishing,” scams like the one described above are the most likely voice deepfakes you might encounter in everyday life, both at work and at home. In 2019, an energy firm was scammed out of US$243,000 when criminals simulated the voice of its parent company’s boss to order an employee to transfer funds to a supplier. In 2022, people were swindled out of an estimated $11 million by simulated voices, including those of close personal connections.

What can you do?

Be mindful of unexpected calls, even from people you know well. This is not to say you need to schedule every call, but it helps to at least email or text message ahead. Also, do not rely on caller ID, since that can be faked, too. For example, if you receive a call from someone claiming to represent your bank, hang up and call the bank directly to confirm the call’s legitimacy. Be sure to use the number you have written down, saved in your contacts list or that you can find on Google.

Additionally, be careful with your personal identifying information, like your Social Security number, home address, birth date, phone number, middle name and even the names of your children and pets. Scammers can use this information to impersonate you to banks, realtors and others, enriching themselves while bankrupting you or destroying your credit.

Here is another piece of advice: know yourself. Specifically, know your intellectual and emotional biases and vulnerabilities. This is good life advice in general, but it is key to protecting yourself from being manipulated. Scammers typically seek to suss out and then prey on your financial anxieties, your political attachments or other inclinations, whatever those may be.

This alertness is also a decent defense against disinformation using voice deepfakes. Deepfakes can be used to take advantage of your confirmation bias, or what you are inclined to believe about someone.

If you hear an important person, whether from your community or the government, saying something that either seems very uncharacteristic for them or confirms your worst suspicions of them, you would be wise to be wary.



Elon Musk and experts call for six-month pause on A.I.


The Future of Life Institute fears there are potential risks to society

Elon Musk and a group of leading A.I. experts are calling for a six-month pause on developing systems more powerful than OpenAI’s latest model, GPT-4.


In an open letter signed by some of the biggest and most influential minds in tech, the Future of Life Institute calls for the pause so that frameworks can be constructed to better handle A.I. and its potential risks to society.

“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the open letter said.

British computer scientist Stuart Russell is a signatory to the open letter, and he explains what is happening in the sector that scares him.

“What is gestating in computer and research labs is general purpose A.I.,” Russell declared recently, “A.I. that can do anything that the human mind can be turned to.

“Because of the enormous advantages machines have over humans, I expect general purpose A.I. will far exceed human capabilities in almost every dimension.”



Alibaba shares soar as company breaks into parts


Alibaba shares have soared as company executives announce a business shake-up

It’s been a good day for investors in Chinese tech giant Alibaba.

Shares in the company soared as executives announced a plan to break the business into parts.

Alibaba’s chief executive says he will split the $220 billion empire into six individual units.

The major restructuring is the company’s biggest in 24 years.

Alibaba shares gained more than 14 per cent in New York and were up 13 per cent in Hong Kong.

The move follows reports Alibaba founder Jack Ma resurfaced in China this week after a long absence.

The units will have their own chief executives and boards of directors.

They will be allowed to raise capital and seek stock market listings.

Alibaba says the units will “capture opportunities in their respective markets and industries, thereby unlocking the value of Alibaba Group’s respective businesses”.

“The market is the best litmus test, and each business group and company can pursue independent fundraising and IPOs when they are ready,” says chief executive Daniel Zhang.



Facial recognition has been used a million times by U.S. police


Controversial facial recognition has been used a million times by police to help track criminals

As facial recognition becomes more prominent, the founder of tech firm Clearview says his company has run nearly a million searches for U.S. police.

It’s also been revealed the company has scraped 30 billion images from platforms such as Facebook and Instagram, taken without users’ permission.

The company has been fined or sanctioned numerous times in Europe and in countries such as Australia for breaching privacy laws.

In the U.S., critics say the use of Clearview by authorities puts everyone into a “police line-up”.

The company’s high-tech system allows law enforcement to upload a photo of a face and find matches in a database comprising billions of images it has collected.

It then provides links to where matching images appear online.
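In broad strokes, systems like this turn every face image into a numerical embedding and then look for the stored embeddings closest to the embedding of the uploaded photo. The Python sketch below shows only that nearest-neighbour idea; the embed_face stand-in, the tiny in-memory index and the example URLs are assumptions for illustration, not Clearview’s actual, proprietary pipeline.

```python
# Conceptual sketch of embedding-based face search. Clearview's real system is
# proprietary; the embeddings, URLs and embed_face stand-in below are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical index: embedding vectors paired with the pages they were scraped from.
# A real deployment would hold billions of entries produced by a face-embedding model.
index = [
    (np.array([0.12, 0.88, 0.45]), "https://example.com/photo_a.jpg"),
    (np.array([0.90, 0.10, 0.33]), "https://example.com/photo_b.jpg"),
]

def search(query_embedding: np.ndarray, top_k: int = 5):
    """Rank stored faces by similarity to the query and return the best matches."""
    scored = [(cosine_similarity(query_embedding, emb), url) for emb, url in index]
    return sorted(scored, reverse=True)[:top_k]

# Usage: embed the uploaded photo (embed_face is a stand-in, not a real library call),
# then list the most similar stored images and where they appear online.
query = np.array([0.15, 0.85, 0.40])   # pretend result of embed_face("uploaded_photo.jpg")
for score, url in search(query):
    print(f"{score:.3f}  {url}")
```

Real systems replace the simple list with a specialised vector index so that a single query can be matched against billions of embeddings quickly.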

The tool is considered to be one of the world’s most powerful and accurate.

While the company is banned from selling its services to most U.S. companies, there is an exemption for police.
