- 30 April 2024
Humans can inherit artificial intelligence biases
Recommendations made by AI systems can adversely impact human decisions
It is known that algorithms often reproduce the mistakes and prejudices of their creators, but the opposite can also occur: a biased AI can influence human decision-making even when the AI is no longer present. This is revealed by experiments carried out at the University of Deusto.
A study conducted by psychologists Lucía Vicente and Helena Matute from the University of Deusto, in Bilbao, shows that we humans can inherit biases from artificial intelligence (AI). The study is published in the open-access journal Scientific Reports. These technologies have become very popular thanks to their impressive and seemingly highly reliable results. However, their algorithms can contain biases, where a bias is defined as a systematic error in the algorithm's output.
These programs are trained with data that are the product of human decisions. If the data contain biases, artificial intelligence models trained with these data will learn and reproduce those biases. In fact, there is already sufficient evidence that AI indeed inherits and amplifies human biases.
This situation was known, but now the new study shows that the opposite effect can also occur: we humans could inherit biases from AI, which means there is a risk of entering a rather dangerous loop.
For their research, the authors set up a series of experiments where volunteers had to perform a fictitious medical diagnosis task using a matrix with two colours, dark and light, which represented human tissue samples obtained from patients affected by a fictitious syndrome.
Positive or negative to the disease
The researchers defined a classification criterion: a sample with a greater proportion of dark cells was affected by the disease and should be classified as positive, whereas a sample with a greater proportion of light cells should be classified as negative. Participants had to choose between these two options.
Half of the volunteers constituted a control group that performed the task without any AI assistance. The other half performed the task with the recommendations of an artificial intelligence that contained a systematic error: for tissue samples with a dark/light cell ratio of 40/60, its recommendation was always wrong.
For these samples, the AI's recommendation contradicted the classification criterion: it indicated that the sample was positive when, according to the proportions of the two colours, it should have been classified as negative.
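The biased recommender described above can be sketched in a few lines of code. This is only an illustrative simulation of the setup, not the researchers' actual materials; the exact threshold and encoding are assumptions.

```python
# Illustrative sketch of the experiment's task (parameter values and
# function names are assumptions, not taken from the study itself).

def true_label(dark_fraction):
    """Classification criterion: a majority of dark cells -> positive."""
    return "positive" if dark_fraction > 0.5 else "negative"

def biased_ai_recommendation(dark_fraction):
    """Simulated biased AI: systematically wrong on 40/60 dark/light
    samples, recommending 'positive' although the criterion says
    'negative'; correct everywhere else."""
    if abs(dark_fraction - 0.40) < 1e-9:
        return "positive"  # contradicts the classification criterion
    return true_label(dark_fraction)

# A 40/60 sample: the criterion says negative, the biased AI says positive.
print(true_label(0.40))                # -> negative
print(biased_ai_recommendation(0.40))  # -> positive
```

Participants in the assisted group saw this kind of systematically wrong recommendation alongside the image, while the control group saw only the image.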
“The participants reproduced this systematic error of the AI, even though it was very easy to discriminate the images, as demonstrated by the control group, which was not assisted by this biased artificial intelligence and hardly made any mistakes in the task,” Lucía Vicente and Helena Matute explain to SINC.
Same error without the AI
In addition, the most relevant result was that, in a second phase in which the AI was no longer present, the volunteers continued to make the same systematic error when they went on to perform the diagnostic task without its assistance.
These people reproduced the bias in a context without the support of AI, thus evidencing an inherited bias. This did not occur in the participants of the control group, who had performed the task without help from the beginning.
The authors conclude: “We have found that it is possible for people advised by a biased AI to end up adapting their decisions to the results of this artificial agent, and what’s more, that this influence can persist even when people move to an unassisted context”.
These results show, therefore, that biased recommendations made by AI systems can adversely impact human decisions in the long term.
For the authors, this finding indicates the necessity for more psychological and multidisciplinary research on the interaction between artificial intelligence and humans.
They also consider evidence-based regulation necessary to guarantee ethical and trustworthy AI, regulation that takes into account not only the technical aspects but also the psychological aspects of the relationship between this technology and humans.
Reference:
Vicente, L., & Matute, H. “Humans inherit artificial intelligence biases”. Scientific Reports, 2023.