The Visual Lies of AI

Media images depicting AI as sentient, white humanoid robots mask the responsibility of the humans who actually develop this technology. / Adobe Stock (AI-generated)

Federico Kukso / SINC

As artificial intelligence spreads into every corner of our lives, a cliché has taken hold: representing these systems as white humanoid robots, glowing brains, or imagery borrowed from science fiction. A growing group of specialists rejects this iconography, arguing that it is loaded with historical biases of gender, ethnicity and religion, generates unrealistic expectations, and masks the social effects of these technologies.

On April 25, 2018, the European Commission published a document that marked the beginning of its path towards the world's first artificial intelligence regulatory law. It detailed the recommendations presented by a group of experts, listed the steps to follow, and proposed ethical guidelines for trustworthy AI. Without attracting much attention, a striking image accompanied the text: a human hand and a robotic hand, shiny and metallic, stretched out against a blue and gray background, almost touching at the fingertips.

Inspired by Michelangelo's The Creation of Adam, the fresco that has adorned the ceiling of the Sistine Chapel in the Vatican since the 16th century, this composition presents AI as a divine creative force, a spark connecting the human and the divine. It has become one of the unofficial posters of artificial intelligence.

Images recreating Michelangelo's 'The Creation of Adam' are often used to illustrate AI, as the European Commission did in 2018. / Michelangelo/European Commission

You see it over and over again on news sites and in magazines, in advertising, on book covers, in press releases, on posters for courses and conferences, alongside quaint portraits of invariably white humanoid robots, bright blue brains, circuit boards, binary code and hackneyed references to sci-fi characters like the Terminator.

Together they make up the current iconography of AI, its visual translation, increasingly criticized by specialists who warn of its dangers and of the stereotyped, potentially discriminatory assumptions about gender, ethnicity and religion that it carries.

The current iconography of AI with white humanoid robots is increasingly criticized by specialists, who warn about stereotypical and discriminatory ideas

"These images completely distort the realities of AI," ethics researcher Tania Duarte tells SINC. Duarte is the founder of We and AI, a non-profit organization in the United Kingdom that seeks to improve artificial intelligence literacy for social inclusion and to foster critical thinking. "They reinforce myths about AI," she underlines, "false ideas about infallibility, sentience and the religious mysticism that surrounds these technologies, and they do not help the public develop an understanding of what is at stake."


The power of the image

As the writer Susan Sontag explained long ago, images are powerful things. They frame our perception of the world; they confirm, reinforce or dispel stereotypes. They influence the way society thinks about and understands the subjects they portray. In the case of AI, certain illustrations exaggerate the capabilities of these increasingly ubiquitous computer systems.

Others sow fear, increase public distrust, mask the responsibility of humans, and perpetuate an understanding of AI that is at once too limited, too fearful and too mysterious, showing neither its real context nor its processes, applications and limitations. And they deflect debate away from the potentially significant implications for the research, funding, regulation and reception of these systems.

To combat these clichés and distortions, Duarte and a team of over a hundred engineers, artists, sociologists and ethics specialists are driving a program known as Better Images of AI.


With the support of the Alan Turing Institute, the Leverhulme Centre for the Future of Intelligence and the BBC's research and development department, the project exposes how these dominant representations reinforce misconceptions and limit public understanding of how AI systems are used and how they work.

"We need images that more realistically portray the technology and the people behind it, and that point out its strengths, weaknesses, contexts and applications," say its promoters, who offer a free repository of better AI images and a guide of tips for their responsible use.

An image from the ‘Better Images of AI’ group. / Alan Warburton/BBC/Better Images of AI


Image factories

A substantial share of the photographs and illustrations used in advertising, corporate marketing, website design and news sites comes from "image factories": the stock photography business, a multi-billion-dollar global industry dominated by a handful of transnational corporations (Getty Images, Corbis, Alamy, Shutterstock, Adobe Stock and Science Photo Library, among others) that is a powerful force in contemporary visual culture.

As researchers Gaudenz Urs Metzger and Tina Braun from the Higher School of the Arts in Zurich (Switzerland) point out in a study published in the Journal of Death and Dying, these images influence how a subject is imagined, as well as what can be said and not said in a society.

As media companies invest less and less in photographers and infographic designers, stock images expand their presence.


Usually they are montages, graphics or generic illustrations in a realistic style, though more symbolic than documentary, that seek to catch the reader's attention by visually representing complex subjects: the molecular scissors of the CRISPR gene-editing technique, the subatomic world, the inside of the human body, quantum computing, blockchain or cloud computing.

In the case of AI, the white humanoid robot has recently been elevated to the role of visual ambassador for these complex technologies, under the direct influence of popular culture. As documented by the AI Narratives Project of the Royal Society of London, there is a strong tendency to imagine intelligent machines in human form.


An AI with hypersexualized forms

"When people imagine other intelligent beings, these imaginings tend to take humanoid form," notes the project's main report. "A consequence of this anthropomorphism is that AI systems are often represented as gendered: their physical forms are usually not androgynous, but have the stereotypical secondary sexual characteristics of men or women. In fact, they are often hypersexualized: they have exaggeratedly muscular male bodies and aggressive tendencies, like the Terminator, or conventionally beautiful female forms, like Ava in the film Ex Machina."

AI and its branches, such as machine learning, are not simple subjects to explain or illustrate. But imagining these technologies as androids is problematic: it misinforms, raises expectations too high, suggests that their ultimate goal is to replace humans, and makes it difficult to consider their real benefits. Moreover, the more human these representations look, the more ethnically white their features tend to be.

Imagining these technologies as androids misinforms, raises expectations too high, suggests that their purpose is to replace humans and makes it difficult to consider their real benefits

Hence Stephen Cave and Kanta Dihal, researchers at the University of Cambridge (UK), argue in their essay The Whiteness of AI that the representation of AI suffers from a racial problem. "To imagine machines that are intelligent, professional or powerful is to imagine white machines, because the white racial frame ascribes these attributes predominantly to whiteness," say the authors of Imagining AI: How the World Sees Intelligent Machines.

"In European and North American societies, whiteness is so normalized that it becomes largely invisible. By framing whiteness as the default color of the future, these images help imagine a technological future that excludes people of color, just as the big technology companies do today," they say.


Clichés and confusion about AI

The visual clichés are many: circuit boards, cascading binary code, glowing brains floating in dark, empty space, isolated from the needs of a body, in a clear allusion to intelligence, even though much of the AI and machine learning in use today is far removed from human intelligence. "People who don't understand what AI is can't debate the ways technology should or shouldn't impact society," says Duarte.

Bright brains and white robots make up the current iconography of AI. / Wiley/Routledge/Apress

In addition to the prevalence of white robots, the favorite color used by technology companies to represent their systems is blue, usually associated with intelligence, trust, seriousness and efficiency, but also with masculinity, as historian Alexandra Grieser notes in Blue Brains: Aesthetic Ideologies and the Formation of Knowledge Between Religion and Science.

"There is a big gap between the reality of AI and the perception of the general public, which is fueled by media representations and films that often present AI as autonomous robots with almost human capabilities," says Mexican software developer Yadira Sánchez, a consultant for Better Images of AI.

"This distortion is worrying because it distracts people from truly understanding and engaging in crucial debates about how AI is reshaping essential areas such as healthcare, agriculture and state surveillance," she adds. "These inaccurate images also create overly optimistic or overly pessimistic visions of the future of AI, and this is dangerous because they generate expectations and fears in society."


Other images to minimize confusion

To minimize confusion, say the Better Images of AI researchers, it is important to use images that honestly represent the reality of AI, including the natural resources and materials consumed in its development: for example, lithium extraction in Latin America, or the water and rivers used to keep data centers running.

“We should show how AI integrates and affects our natural and social environments,” adds Sánchez, “and how its infrastructure directly impacts the ecosystem and people’s daily lives”.

“We should show how AI integrates and affects our natural and social environments, and how its infrastructure impacts the ecosystem and our daily lives”

Yadira Sánchez (Better Images of AI)

"AI stock photography disease has reached epidemic status," says researcher Adam Geitgey, a specialist in machine learning and facial recognition. AI, however, is not the only scientific field that suffers from these visual distortions: to one degree or another, every discipline suffers when it is represented in newspapers, advertisements, public campaigns or institutional websites.

"The media have chosen the simplest solution: using images that people recognize and quickly associate with science," says Portuguese sociologist Ana Delicado, who studied the visual representations of science during the covid-19 pandemic on the websites of political institutions and media outlets in Portugal and Spain.

"Stock images are always based on stereotypes. They are commercial products: their aim is to sell, not to reflect on the topics they sell. Stereotypical images in the media contribute to perpetuating stereotyped representations in public opinion," she stresses.

During the covid-19 pandemic, representations of science were found to still rely heavily on stereotypical images of laboratories and white coats

In her analysis, published in the journal Frontiers in Communication, this researcher from the Institute of Social Sciences of the University of Lisbon recorded the overuse of illustrations of the SARS-CoV-2 coronavirus with its spikes painted red, making it seem more threatening.

She also detected a long list of visual elements alluding to scientific activity: DNA helices, cells, molecules, X-rays, and laboratory equipment such as microscopes, test tubes, petri dishes and pipettes, often held by gloved, disembodied hands, as well as people in white coats, protective glasses and masks, but without ethnic diversity, reflecting the under-representation of minorities in the scientific community.

"The images chosen to illustrate the science of covid-19 tend to reproduce stereotypical notions of scientific research as a laboratory-centered activity," says Delicado.


New image generators

Now, with the rise of image-generating AI systems such as Stable Diffusion, Midjourney and DALL·E, there are fears that distortions of the reality of AI, and of scientific activity in general, will worsen. As research has already shown, these tools, which produce images from text prompts, often yield racist and sexist results.

That is, they reproduce and amplify racial and gender biases, since they are trained on images used by media outlets, global health organizations and internet databases. "These images are used without criticism, without reflection," warns the Portuguese sociologist, "and AI will continue to perpetuate stereotypes about science and technology."

Source: SINC

Rights: Creative Commons
