
Tech

How researchers say you can avoid an online scam


Keeping up with the latest digital cons is exhausting. Fraudsters always seem to be one step ahead. But our study found there is one simple thing you can do to drastically reduce your chances of losing money to web scams: slow down.

In fact, among the various techniques used by scammers, creating a sense of urgency or the need to act or respond quickly is probably the most damaging. As with many legitimate sales, acting fast reduces your ability to think carefully, evaluate information and make a careful decision.

The COVID lockdowns made us all more reliant on online services such as shopping and banking. Quick to take advantage of this trend, scammers have since increased the rate and spectrum of online fraud. Cybersecurity company F5 found phishing attacks alone increased by over 200% during the height of the global pandemic, compared to the yearly average.

One fraud type many people fall victim to is fake websites that spoof legitimate business or government sites. According to the Better Business Bureau, a nonprofit that handles consumer complaints, fake websites are one of the most commonly reported scams. They caused estimated retail losses of approximately US$380 million (£316 million) in the US in 2022. The true losses are probably far higher, because many cases go unreported.

How to know

We developed a series of experiments to evaluate what factors impact people’s ability to distinguish between real and fake websites. In our studies, participants viewed screenshots of real and fake versions of six websites: Amazon, ASOS, Lloyds Bank, the World Health Organisation COVID-19 donation website, PayPal and HMRC. The number of participants varied, but we had more than 200 in each experiment.

Each study involved asking participants whether they thought the screenshots showed authentic websites or not. Afterwards, they also took tests to evaluate their internet knowledge and analytical reasoning. Earlier research has shown analytical reasoning affects our ability to distinguish real news from fake news, and to spot phishing emails.

People tend to employ two types of information processing – system one and system two. System one is quick, automatic, intuitive and related to our emotions. We know experts rely on system one to make quick decisions. System two is slow, conscious and laborious. The ability to perform well on analytical reasoning tasks has been associated with system two but not system one thinking. So we used analytical reasoning tasks as a proxy to help us tell whether people are leaning more on system one or two thinking.

An example of one of the questions in our analytical reasoning test is: “A bat and ball together cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost?”
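The intuitive (system one) answer is 10 cents, but it is wrong: the correct answer is 5 cents. A quick check of the arithmetic, sketched in Python:

```python
# Let x be the price of the ball; the bat then costs x + 1.00.
# Together: x + (x + 1.00) = 1.10  =>  2x = 0.10  =>  x = 0.05
ball = 0.10 / 2
bat = ball + 1.00
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
```

Someone relying on system one blurts "10 cents"; working through the equation is exactly the slow, deliberate system two thinking the test measures.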

The big reveal

Our results showed higher analytical reasoning ability was linked to a better ability to tell fake and real websites apart.

Other researchers have found time pressure reduces people’s ability to detect phishing emails. It also tends to engage system one processing rather than system two. Scammers do not want us to carefully evaluate the information but engage emotionally with it. So our next step was to give people less time (about 10 seconds compared to 20 seconds in the first experiment) to do the task.

This time we used a new set of participants. We found participants who had less time to judge the credibility of a webpage showed poorer ability to discriminate between real and fake websites. They were about 50% less accurate compared to the group who had 20 seconds to decide whether a website was fake or real.

In our final study, we provided a new set of participants with 15 tips on how to spot fake websites (for instance, check the domain name). We also asked half of them to prioritise accuracy and take as much time as they needed while the other half were instructed to work as quickly as possible. Working quickly rather than accurately was linked to worse performance, and to poor recall of the 15 tips we provided earlier.

Is it for real?

With increasing internet use among all age groups, scammers are capitalising on people's tendency to use more intuitive information processing when evaluating whether a website is legitimate. Scammers often design their solicitations to encourage people to act quickly, because they know decisions made under such conditions are in their favour. For example, advertising that a discount is ending soon.

Much of the advice about how to identify fake websites suggests you carefully examine the domain name, check for the padlock symbol, use website checkers such as Get Safe Online, look for spelling errors, and be wary of deals that sound too good to be true. These suggestions, obviously, require time and deliberate action. Indeed, possibly the best advice you could follow is: slow down.
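The "examine the domain name" tip can even be partly automated. The sketch below (Python; the trusted-domain list is illustrative, not taken from the article) extracts the hostname from a link and flags it unless it is an exact match for, or a subdomain of, a domain you already trust:

```python
from urllib.parse import urlparse

# Illustrative allow-list; in practice, use the domains you actually bank and shop with.
TRUSTED_DOMAINS = {"amazon.co.uk", "paypal.com", "lloydsbank.com"}

def looks_trusted(url: str) -> bool:
    """True only if the URL's hostname is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_trusted("https://www.paypal.com/signin"))                # True
# A common lookalike trick: the real registered domain here is "account-verify.net".
print(looks_trusted("https://paypal.com.account-verify.net/signin")) # False
print(looks_trusted("https://lloyds-bank-secure.com/login"))         # False
```

The point of the sketch is the same as the researchers' advice: checking the registered domain, rather than whatever familiar brand name appears somewhere in the URL, takes a moment of deliberate attention.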


Tech

Why ChatGPT’s latest update will be a game-changer for AI adoption


OpenAI has introduced new updates to ChatGPT, aiming for a more direct and concise conversational style.

  • GPT-4 Turbo is now available to paid ChatGPT users only.

  • “gpt-4-turbo-2024-04-09” will bring greatly enhanced writing, math, logical reasoning and coding.

  • “When writing with ChatGPT, responses will be more direct, less verbose and use more conversational language,” OpenAI writes in a post on X.

These changes come in response to user feedback and a desire to improve the efficiency of interactions with the AI model.

Streamlined AI

The adjustments focus on reducing verbosity in ChatGPT’s responses, ensuring that the AI communicates with users more effectively.

By streamlining its language, OpenAI hopes to enhance user experience across various applications, from customer service chatbots to language learning platforms.

This move aligns with OpenAI’s ongoing efforts to refine its models and make them more adaptable to diverse communication needs.



Tech

Meta’s plans to hide nudity from Instagram DMs


Instagram, owned by Meta, announced plans to introduce features that will blur messages containing nudity in an effort to protect teenagers and prevent potential scammers from targeting them.

Meta’s decision comes amidst growing concerns regarding harmful content on its platforms, especially concerning the mental well-being of young users.

The technology giant has faced increasing scrutiny in both the United States and Europe, with accusations that its apps contribute to addiction and exacerbate mental health issues among adolescents.

According to Meta, the new protection feature for Instagram’s direct messages will utilise on-device machine learning to analyse whether an image sent through the service contains nudity.

This feature will be enabled by default for users under the age of 18, with adults being encouraged to activate it as well.

Meta said that because the image analysis occurs on the device itself, the nudity protection feature will function even in end-to-end encrypted chats, where Meta does not have access to the content unless it is reported by users.
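Conceptually, "on-device" analysis means the classifier runs locally before anything is encrypted and sent, so the server never needs to see the image. A simplified illustration of that ordering (the classifier and message format here are invented for the sketch; Meta has not published its implementation):

```python
def classify_on_device(image_bytes: bytes) -> bool:
    # Stand-in for the real on-device ML model, which Meta has not published.
    # Here we simply pretend images starting with a marker byte are "sensitive".
    return image_bytes.startswith(b"NSFW")

def prepare_message(image_bytes: bytes) -> dict:
    # The check runs locally, BEFORE end-to-end encryption, so only the
    # sender's and recipient's devices ever see the unencrypted image.
    return {
        "payload": image_bytes,  # would be E2E-encrypted before transmission
        "blur_by_default": classify_on_device(image_bytes),
    }

msg = prepare_message(b"NSFW" + b"fake image data")
print(msg["blur_by_default"])  # True
```

Because the blur decision travels with the message as a flag, it can be honoured at the receiving end even when the server relaying the message cannot read the payload.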


Direct messages

Unlike Meta’s Messenger and WhatsApp apps, direct messages on Instagram are not currently encrypted.

However, Meta has stated its intention to implement encryption for Instagram’s direct messages in the future.

Additionally, Meta revealed that it is developing technology to identify accounts potentially involved in sextortion scams. The company is also testing new pop-up messages to alert users who may have interacted with such accounts.

This latest move follows Meta’s announcement in January that it would restrict more content from teens on Facebook and Instagram, aiming to reduce their exposure to sensitive topics such as suicide, self-harm, and eating disorders.

Meta’s efforts to enhance safety measures come amid legal challenges and regulatory scrutiny.

Attorneys general from 33 U.S. states, including California and New York, filed a lawsuit against the company in October, alleging repeated misrepresentation of the dangers associated with its platforms.


Tech

Fake AI law firms exploit copyright notices for SEO gains


It’s been revealed that fake AI-driven law firms are resorting to sending fabricated DMCA (Digital Millennium Copyright Act) infringement notices to website owners.

These deceptive practices aim to generate artificial Search Engine Optimization gains through the manipulation of backlinks, casting a shadow on the integrity of online legal proceedings.

The issue was brought to attention when Ernie Smith, a prominent writer behind the newsletter Tedium, found himself targeted by one such fraudulent firm named “Commonwealth Legal.” Representing the “Intellectual Property division” of Tech4Gods, the purported law firm accused Smith of copyright infringement over a photo of a keyfob sourced from Unsplash, a legitimate photo service.

The firm demanded immediate action to add a credit link to Tech4Gods and threatened further legal action if compliance was not met within five business days.

However, a closer examination revealed glaring inconsistencies with Commonwealth Legal’s legitimacy.

Despite claiming to be based in Arizona, the firm’s website domain was registered with a Canadian IP location, raising doubts about its authenticity.

AI-generated faces

The attorneys listed on the website displayed eerie characteristics common to AI-generated faces, casting doubt on their existence.

Further investigation revealed that these fake law firms resort to such deceitful tactics to manipulate backlinks, which are crucial for improving a website’s search engine ranking.

Backlinks from reputable sites contribute to SEO; by exploiting this, fake firms attempt to boost their clients’ online presence through artificial means.

The sinister nature of these actions extends beyond mere SEO manipulation.

They undermine the trust in legal proceedings and pose a threat to the integrity of online content. The emergence of AI-driven deception in legal matters underscores the need for vigilant scrutiny and robust measures to combat such fraudulent activities.


Copyright © 2024 The Ticker Company