As the internet gets excited about AI-generated art, concerns grow over how the technology reflects biased training data
A seagull attacking a man with spaghetti, a skateboarding dinosaur, and a cup of coffee that contains the universe.
These are just some of the odd prompts people have given to new AI systems, receiving often unusual, yet incredibly detailed images in return.
While the majority of the AI art you are likely seeing on social media comes from DALL-E mini, other notable systems include OpenAI's DALL-E 2 and Google Research's Imagen.
Google's own research shows the technology appears to encode "several social biases and stereotypes".
But experts fear these systems are also capable of producing disinformation, based on the gender and cultural biases in the data they are trained on.
An OpenAI online document titled 'Risks and Limitations' illustrates these biases with an example: a text prompt for "CEO" returns images of predominantly white men.
DALL-E mini output for the text prompt "CEO"
The technology has "an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes."
Google Research, Brain Team
AI bias has become a growing concern as multiple industries come to rely on the technology, including the Ukrainian government, which is using AI for facial recognition.
Researchers are still learning how to measure bias in these systems and expect to make changes to them as they learn more.