The next AI frontier – the deepfakes are calling

Voice deepfakes are calling – here’s what they are and how to avoid getting scammed

You have just returned home after a long day at work and are about to sit down for dinner when suddenly your phone starts buzzing. On the other end is a loved one, perhaps a parent, a child or a childhood friend, begging you to send them money immediately.

You ask them questions, attempting to understand. There is something off about their answers, which are either vague or out of character, and sometimes there is a peculiar delay, almost as though they were thinking a little too slowly. Yet you are certain that it is your loved one speaking: that is their voice you hear, and the caller ID shows their number. Chalking up the strangeness to their panic, you dutifully send the money to the bank account they provide.

The next day, you call them back to make sure everything is all right. Your loved one has no idea what you are talking about. That is because they never called you – you have been tricked by technology: a voice deepfake. Thousands of people were scammed this way in 2022.

As computer security researchers, we see that ongoing advancements in deep-learning algorithms, audio editing and engineering, and synthetic voice generation have meant that it is increasingly possible to convincingly simulate a person’s voice.

Even worse, chatbots like ChatGPT are starting to generate realistic scripts with adaptive real-time responses. By combining these technologies with voice generation, a deepfake goes from being a static recording to a live, lifelike avatar that can convincingly have a phone conversation.

Crafting a compelling high-quality deepfake, whether video or audio, is not the easiest thing to do. It requires a wealth of artistic and technical skills, powerful hardware and a fairly hefty sample of the target voice.

There are a growing number of services offering to produce moderate- to high-quality voice clones for a fee, and some voice deepfake tools need a sample only a minute long, or even just a few seconds, to produce a voice clone that could be convincing enough to fool someone. Convincing a loved one, however, such as in an impersonation scam, would likely take a significantly larger sample.

With all that said, we at the DeFake Project of the Rochester Institute of Technology, the University of Mississippi and Michigan State University, and other researchers are working hard to be able to detect video and audio deepfakes and limit the harm they cause. There are also straightforward and everyday actions that you can take to protect yourself.

For starters, voice phishing, or “vishing,” scams like the one described above are the most likely voice deepfakes you might encounter in everyday life, both at work and at home. In 2019, an energy firm was scammed out of US$243,000 when criminals simulated the voice of its parent company’s boss to order an employee to transfer funds to a supplier. In 2022, people were swindled out of an estimated $11 million by simulated voices, including those of close, personal connections.

What can you do?

Be mindful of unexpected calls, even from people you know well. This is not to say you need to schedule every call, but it helps to at least email or text ahead. Also, do not rely on caller ID, since that can be faked, too. For example, if you receive a call from someone claiming to represent your bank, hang up and call the bank directly to confirm the call’s legitimacy. Be sure to use the number you have written down, saved in your contacts list or can find on Google.

Additionally, be careful with your personal identifying information, like your Social Security number, home address, birth date, phone number, middle name and even the names of your children and pets. Scammers can use this information to impersonate you to banks, realtors and others, enriching themselves while bankrupting you or destroying your credit.

Here is another piece of advice: know yourself. Specifically, know your intellectual and emotional biases and vulnerabilities. This is good life advice in general, but it is key to protect yourself from being manipulated. Scammers typically seek to suss out and then prey on your financial anxieties, your political attachments or other inclinations, whatever those may be.

This alertness is also a decent defense against disinformation using voice deepfakes. Deepfakes can be used to take advantage of your confirmation bias, or what you are inclined to believe about someone.

If you hear an important person, whether from your community or the government, saying something that either seems very uncharacteristic for them or confirms your worst suspicions of them, you would be wise to be wary.

Airbus A320 fleet faces software upgrade due to data corruption risk

Airbus alerts A320 operators to urgent software fix after JetBlue incident raises safety concerns

In Short:
– Airbus warns over half of A320 fleet needs software fixes due to potential data corruption risks.
– Affected airlines must complete upgrades before their jets’ next flights, with operational disruptions anticipated during a busy travel season.

Airbus has issued a warning regarding its A320 fleet, indicating that over half of the active jets will require a software fix.

It follows a recent incident involving a JetBlue Airways aircraft, where “intense solar radiation” was found to potentially corrupt data crucial for flight control system operation.

The European plane manufacturer stated that around 6,500 jets may be affected. A regulatory directive mandates that the software upgrade be completed before each affected jet’s next scheduled flight.

Operational disruptions for both passengers and airlines are anticipated. The issue arose from an October 30 incident in which a JetBlue flight experienced an uncommanded descent; no injuries occurred, but the malfunction of an automated computer system was identified as a contributing factor.

Airlines, including American Airlines Group, have begun to implement the required upgrades.

The majority of affected jets can receive an uncomplicated software update, although around 1,000 older models will necessitate an actual hardware upgrade, requiring grounding during maintenance.

Hungarian airline Wizz Air has also initiated necessary maintenance for compliance, potentially affecting flights. This announcement has surfaced during a busy travel season in the US, with many facing delays due to other factors as well.

Regulatory Response

The European Union Aviation Safety Agency has mandated that A320 operators replace or modify specific elevator-aileron computers. The directive follows the JetBlue incident, where a malfunction led to a temporary loss of altitude.

Airbus’s fix applies to both the A320 and A320neo models and is a key step in ensuring the safety of the fleet.


China blocks ByteDance from using Nvidia chips in new data centres

China blocks ByteDance from using Nvidia chips, tightening tech control and pushing for domestic AI innovation amid U.S. restrictions.

Chinese regulators have moved to block ByteDance from deploying Nvidia chips in newly built data centres, tightening control over the foreign technology used by the country’s major tech firms. The decision comes after ByteDance made substantial purchases of Nvidia hardware amid fears of shrinking supply from the United States.

Washington has already restricted the sale of advanced chips to China, allowing only weakened versions into the market. Beijing’s latest move reflects its push to reduce dependence on U.S. technology and accelerate home-grown AI innovation.

The ban places operational and financial pressure on ByteDance, which must now work around a growing pile of Nvidia chips it is no longer allowed to use. Domestic suppliers like Huawei are expected to step in as China intensifies its pursuit of tech self-reliance.

OpenAI launches shopping research tool for ChatGPT users

OpenAI launches shopping research tool to enhance e-commerce experience ahead of holiday season spending boost

In Short:
– OpenAI’s “shopping research” tool helps users find detailed shopping guides tailored to their preferences.
– Users will soon be able to make purchases through Instant Checkout, and OpenAI says user chats are not shared with retailers.
OpenAI has launched a new tool called “shopping research,” coinciding with an increase in consumer spending ahead of the holiday season.

This tool is aimed at ChatGPT users seeking comprehensive shopping guides that detail top products, key differences, and the latest retailer information.

Users can customise their guides based on budget, features, and recipients. OpenAI notes that while the tool takes a few minutes to generate responses, users can still use ChatGPT for quicker queries like price checks.

When users ask specific prompts, such as finding a quiet cordless stick vacuum or a gift for a niece who loves art, the shopping research tool will appear automatically. It can also be accessed via the menu.

Shopping Research

OpenAI has been expanding its e-commerce capabilities, with the introduction of the Instant Checkout feature in September, enabling purchases directly through ChatGPT.

Soon, users of the shopping research tool will also be able to use Instant Checkout for making purchases.

OpenAI says shopping research results are derived from publicly available retail websites and that user chats will not be disclosed to retailers, although it warns that inaccuracies may occur in product availability and pricing.

Shopping research is now available to OpenAI’s Free, Go, Plus, and Pro users logged into ChatGPT.

