Published on March 12, 2024, 3:30 pm

For all its promise, generative AI is not immune to the biases and stereotypes prevalent in society. A recent report by UNESCO's International Research Centre on Artificial Intelligence (IRCAI) sheds light on significant gender- and sexuality-based biases embedded in generative AI outputs.

The study revealed that Generative AI often associates feminine names with traditional gender roles, generates negative content regarding LGBTQ+ subjects, and perpetuates stereotypical professions based on gender and ethnicity. These biases are rooted in three main categories: data issues, algorithm selection, and deployment bias.

Data issues arise when AI models lack representation of underprivileged groups or fail to account for differences in sex or ethnicity, leading to inaccuracies. Algorithm selection can introduce aggregation or learning biases, such as favoring male job candidates over female ones because of existing gender disparities in the training data. Deployment bias occurs when AI systems are used in contexts different from those they were developed for, producing inappropriate associations such as linking psychiatric terms with specific ethnic groups.
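The hiring example above can be made concrete with a standard fairness metric. Below is a minimal sketch, using entirely invented numbers, of the disparate impact ratio: the selection rate of one group divided by that of another. (The metric and the "four-fifths" rule of thumb are real; the candidate outcomes here are hypothetical.)

```python
# Hypothetical illustration: quantifying skewed outcomes from a screening
# model with the disparate impact ratio. All candidate data below is invented.

def selection_rate(decisions):
    """Fraction of candidates selected (decision == 1)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of group_a's selection rate to group_b's.
    Values below ~0.8 are a common rule-of-thumb red flag."""
    return selection_rate(group_a) / selection_rate(group_b)

# Invented example: 1 = candidate advanced by the model, 0 = rejected.
female_outcomes = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # 2/10 selected
male_outcomes   = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]  # 5/10 selected

ratio = disparate_impact(female_outcomes, male_outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.2 / 0.5 = 0.40
```

A ratio of 0.40, well under the 0.8 threshold, is the kind of signal auditors look for before a system like this is deployed.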

The report underscores that the biases present in large language models (LLMs) driving modern AI reflect the human-generated data they are trained on. As a result, these AI models can perpetuate stereotypes and reinforce societal biases against women and girls across various sectors, from finance to healthcare.
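One way to see how human-generated text imparts such associations is to count which professions co-occur with gendered pronouns in a corpus. The toy sketch below uses an invented five-sentence mini-corpus; real audits use large corpora and statistical tests, but the mechanism is the same.

```python
# Toy illustration: skewed pronoun-profession co-occurrences in training text
# become skewed associations in models trained on it. The corpus is invented.
from collections import Counter

corpus = [
    "she worked as a nurse at the clinic",
    "he worked as an engineer at the plant",
    "she worked as a teacher in town",
    "he worked as a doctor at the hospital",
    "she worked as a nurse on the night shift",
]

PROFESSIONS = {"nurse", "engineer", "teacher", "doctor"}

def cooccurrence(corpus, pronoun):
    """Count professions appearing in sentences containing `pronoun`."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if pronoun in words:
            counts.update(w for w in words if w in PROFESSIONS)
    return counts

print(cooccurrence(corpus, "she"))  # "nurse" dominates in this toy corpus
print(cooccurrence(corpus, "he"))   # "engineer" and "doctor" appear instead
```

A model trained on text with this skew has no independent notion of fairness to correct it; it simply reproduces the statistics it was given.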

To combat these pervasive biases within AI systems, the researchers advocate holistic approaches that combine legal and social interventions with technological solutions. They stress the importance of integrating anti-discrimination measures into the core of AI development processes and involving marginalized groups in shaping AI technologies to foster inclusivity.

As we navigate the complexities of advancing artificial intelligence, addressing bias remains a critical imperative to ensure equitable and responsible AI applications that benefit all members of society. Efforts to mitigate bias must be comprehensive, involving various stakeholders and embracing intersectional perspectives to promote fairness and diversity in AI innovation.

