Popular AI-based software, such as ChatGPT, produces texts that show gender bias. This is clear from research conducted by the United Nations Educational, Scientific and Cultural Organization (UNESCO). The organization published the findings on Thursday in the run-up to International Women's Day.
The study tested, among others, the widely used models GPT-3.5 and Llama 2. GPT-3.5 is the model behind OpenAI's chatbot ChatGPT; Llama 2 was created by Meta (formerly Facebook). In addition to stereotypical portrayals of women, the models also display homophobic and racist biases, according to the study.
Algorithms link men to professions such as doctor or teacher
The algorithms associate female and male names with traditional role divisions. Women's names were linked to words such as "home," "family," and "children," while men's names were linked to words such as "manager," "salary," and "job."
The models were also asked to complete sentences that began with a person's gender. In 20 percent of cases, Meta's model Llama 2 produced sexist sentences, in which the woman was portrayed as a sexual object or as her husband's property.
In addition, the algorithms often associate men with professions such as doctor, teacher, and driver, while women are often associated with roles such as prostitute, housekeeper, cook, or domestic worker.
AI applications that generate images also tend to produce sexist and racist imagery, according to analyses by Bloomberg News and the daily newspaper The Washington Post.
Not surprising
The algorithms behind AI applications are trained by feeding the software enormous amounts of data, so that it learns to generate text and images on its own. This training data is scraped from the internet, and that content often already contains biases.
Moreover, gender equality is in poor shape in many parts of the world. UN research shows that nine out of ten people hold biases against women.
According to UNESCO, stereotypes reinforced by artificial intelligence also contribute to women being underrepresented in the AI sector. The organization therefore recommends that AI companies employ more women.
"An increasing number of people are using these language models in their work or studies. These new AI applications can subtly shape the perceptions of millions of people, so even a slight gender bias in their output can significantly amplify inequality in the real world," warns Audrey Azoulay, Director-General of UNESCO.
UNESCO calls on governments to regulate the use of artificial intelligence more strictly. Governments should require companies to prevent stereotyping in their systems.
Read also:
AI is quick to view women as objects of desire
"In the social domain, artificial intelligence is a persistent problem: it is racist, misogynistic, and more," says columnist Eliaz Nasrallah.