A recent study by researchers at University College London (UCL) highlighted the alarming presence of bias in artificial intelligence tools, particularly in Large Language Models (LLMs). These AI tools, including GPT-3.5, GPT-2, and Llama 2, were found to perpetuate discriminatory stereotypes against women and against individuals from diverse cultures and sexual orientations. The study, commissioned by UNESCO, revealed a pattern of gender-based stereotyping in the content generated by these AI platforms.
The study found that content generated by LLMs exhibited clear biases against women, with female names often associated with words like “family,” “children,” and “husband” – reflecting traditional gender roles. In contrast, male names were linked to words like “career,” “executives,” and “business,” reinforcing stereotypes related to masculinity. Additionally, the researchers noted negative stereotypes based on culture and sexuality in the text generated by these AI models.
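The association pattern described above can be illustrated with a toy probe: counting how often stereotype-linked word groups appear in text generated from female- versus male-name prompts. The sketch below is purely illustrative; the word lists, sample outputs, and counting approach are invented for demonstration and are not the study's actual data or methodology.

```python
from collections import Counter

# Hypothetical word groups echoing the associations reported in the study;
# the real analysis examined the models' full generated corpora.
DOMESTIC = {"family", "children", "husband", "home"}
CAREER = {"career", "executive", "business", "office"}

def association_counts(texts):
    """Count occurrences of each word group across a list of generated texts."""
    counts = Counter(domestic=0, career=0)
    for text in texts:
        words = {w.strip(".,").lower() for w in text.split()}
        counts["domestic"] += len(words & DOMESTIC)
        counts["career"] += len(words & CAREER)
    return counts

# Toy "generated" outputs, invented for illustration only
female_prompted = ["Mary stayed home with her children and husband."]
male_prompted = ["John advanced his career as a business executive."]

print(association_counts(female_prompted))  # domestic terms dominate
print(association_counts(male_prompted))    # career terms dominate
```

A probe like this only surfaces surface-level word co-occurrence; the researchers' point is that such skewed associations appear systematically across large volumes of model output.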
One of the key findings of the study was the lack of diversity in the roles the AI models assigned to each gender. Men were often depicted in high-status professions such as "engineer" or "doctor," while women were commonly relegated to roles such as "domestic servant," "cook," and "prostitute" – roles that are traditionally undervalued or stigmatized. Stories generated by Llama 2 portrayed boys and men in adventurous settings, while women were described in more domestic and passive scenarios, reinforcing outdated stereotypes.
Dr. Maria Perez Ortiz, a researcher from UCL Computer Science and a member of the UNESCO Chair in AI team, emphasized the need for addressing the deep-seated gender biases in AI technologies. She highlighted the importance of developing AI systems that reflect the diversity of human experiences and promote gender equality. The report called for an ethical overhaul in the development of AI tools to ensure they uplift, rather than undermine, gender equity.
The UNESCO Chair in AI at UCL is working with UNESCO to raise awareness about gender biases in AI tools and engage relevant stakeholders in developing solutions. Professor John Shawe-Taylor, the lead author of the report, stressed the importance of global collaboration to address AI-induced gender biases and promote inclusivity in AI technologies. He underscored UNESCO’s commitment to steering AI development towards a more ethical and inclusive direction.
The report was presented at various international forums, including the UNESCO Digital Transformation Dialogue Meeting and the United Nations Commission on the Status of Women. The researchers highlighted the need to challenge historical gender inequalities in fields like science and engineering and emphasized that women are equally capable of excelling in these domains. By confronting gender bias in AI technologies, we can create a more equitable and diverse future for all individuals.