
Twitter competition finds that algorithm bias prefers white, slim, young faces


A student researcher has found that Twitter’s image-cropping algorithm prefers faces that are slim, young and light-skinned

A graduate student at Switzerland’s EPFL university has discovered a bias in Twitter’s image-cropping ‘saliency’ algorithm.

Bogdan Kulynych demonstrated that the algorithm preferred faces that are light-skinned, slim and young. Twitter’s saliency algorithm decides which part of an image is most interesting, and crops the preview around it.


The researcher tested how the software responded to AI-generated faces

Kulynych found that he could manipulate the algorithm to prefer faces by “making the person’s skin lighter or warmer and smoother; and quite often changing the appearance to that of a younger, more slim, and more stereotypically feminine person”.

He achieved this by using an AI face generator to create artificial people with varying features. He was then able to run the images through the algorithm to see which faces the software preferred.

“We should not forget that algorithmic bias is only a part of a bigger picture. Addressing bias in general and in competitions like this should not end the conversation about the tech being harmful in other ways, or by design, or by fact of existing,” said Kulynyc.

“A lot of harmful tech is harmful not because of accidents, unintended mistakes, but rather by design.”

Bogdan Kulynych

“This shows how algorithmic models amplify real-world biases and societal expectations of beauty.”

Twitter’s director of software engineering and head of AI ethics, Rumman Chowdhury, said the findings “showcased how applying beauty filters could game the algorithm’s internal scoring model.

“We create these filters because we think that’s what ‘beautiful’ is, and that ends up training our models and driving these unrealistic notions of what it means to be attractive.”

Twitter’s “algorithmic bug bounty”

The findings mark the conclusion of Twitter’s first “algorithmic bug bounty”, a competition held as part of the DEF CON security conference in Las Vegas.

Twitter awarded the student $3,500 for his efforts.

Last year, Twitter came under fire for cropping out Black faces

This comes after an incident last year, when the tech giant found that its preview crop was more likely to hide Black faces.

Chowdhury said those findings illustrated that “how to crop an image is a decision best made by people”.

Natasha is an Associate Producer at ticker NEWS with a Bachelor of Arts from Monash University. She has previously worked at Sky News Australia and Monash University as an Online Content Producer.

