Study: Sexist Artificial Intelligence in Search Engines
A first-of-its-kind study by researchers at Johannes Kepler University Linz has precisely measured bias in search AI.
Does A.I. know us better than we know ourselves? Modern algorithms are increasingly assessing people, for example at the employment office. The problem is that artificial intelligence has to rely on information provided by people - and that information is often burdened with bias. It became known, for example, that a Microsoft AI denied the Holocaust.
Navid Rekab-Saz and Markus Schedl (Institute for Computational Perception, JKU) have explored how strongly bias affects search engine results.
Rekab-Saz remarked: "In part, we want gender-specific results." If you search Google for rulers of the Roman Empire, the results are naturally male in nature because back then, emperors were men. Problems arise when entering a term such as "nurse". In English, the word covers both male and female nurses. In the resulting images, however, men are very much in the background. "The term ‘nurse’, although gender-neutral, is used in a very feminine way," said the JKU researcher. It’s the other way around for the term ‘CEO’, which returns mostly images of men.
Many AIs use deep learning, and the study’s authors found that it amplifies this effect. Modern search engines no longer look just for the word itself (i.e. nurse or CEO), but also for similar words, subject areas, and related terms. This means, for example, that the term ‘midwife’ is included in a search for "nurse", which pulls the results further toward the female interpretation. "The man-made data already contains the tendency. The AI's targeted search will then amplify the effect." And quite significantly so, as the JKU study shows for the first time. This effect was explored for gender, but it also occurs in other areas such as age, race, and religion.
The JKU researcher added: "That's no reason to reject AI; on the contrary, artificial intelligence is an enormously valuable tool. While deep learning has doubled the quality of search results, there are also problems we have to be aware of."
As a result, Rekab-Saz recommends two steps: distortions of AI results by human bias must be taken into account when programming the algorithms, and the results of artificial intelligence should be viewed with reservation and a healthy dose of human intelligence.