Beyond Bias: Ensuring Fairness in Generative Artificial Intelligence

AI's Bias Against Women Extends Beyond Technology Issues

In recent years, generative artificial intelligence models have been found to exhibit biases in their responses, often leaning in a particular direction. UNESCO raised concerns earlier this year about models such as Llama 2 and GPT-2, which displayed bias against women. However, Wipro’s Global Chief Privacy and AI Governance Officer, Ivana Bartoletti, emphasizes that the real problem is not bias in itself but the point at which that bias translates into discrimination.

Baidu’s response to ChatGPT, Ernie Bot, revealed gender stereotyping in its portrayal of roles. For example, it depicted a nurse as a woman with a ponytail and a stethoscope, while a university professor was visualized as an elderly man. These examples underscore how prevalent biases in generative AI are, reflecting societal norms and stereotypes.

The origin of these biases lies in the data these AI models are trained on, which is created by humans. As Bartoletti puts it, “Garbage in, garbage out.” Careful selection and curation of data sets are essential to prevent biased or inaccurate responses from genAI models. In India, obtaining accurate and unbiased data has proven to be a challenge for both the private and public sectors, highlighting the importance of data quality in AI development.
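To make the “garbage in, garbage out” point concrete, the sketch below shows one simple way a team might audit a text corpus before training: counting how often occupation words co-occur with gendered terms. This is a minimal, hypothetical illustration; the corpus sample, word lists, window size, and the function name cooccurrence_skew are all assumptions for this example, not part of any vendor’s actual pipeline.

```python
# Hypothetical pre-training data audit: count how often occupation
# words co-occur with gendered terms in a corpus sample. Word lists
# and window size are illustrative assumptions.
import re
from collections import Counter

FEMALE = {"she", "her", "hers", "woman", "women"}
MALE = {"he", "him", "his", "man", "men"}
OCCUPATIONS = {"nurse", "professor", "engineer", "teacher", "doctor"}
WINDOW = 5  # tokens of context inspected on each side of an occupation word


def cooccurrence_skew(texts):
    """Return {occupation: Counter(female=..., male=...)} over the sample."""
    counts = {occ: Counter() for occ in OCCUPATIONS}
    for text in texts:
        tokens = re.findall(r"[a-z']+", text.lower())
        for i, tok in enumerate(tokens):
            if tok in OCCUPATIONS:
                # Grab a small window of surrounding tokens and tally
                # gendered terms appearing near the occupation word.
                context = tokens[max(0, i - WINDOW): i + WINDOW + 1]
                counts[tok]["female"] += sum(t in FEMALE for t in context)
                counts[tok]["male"] += sum(t in MALE for t in context)
    return counts


if __name__ == "__main__":
    sample = [
        "She worked as a nurse before he became a professor.",
        "The professor said his lecture was ready; the nurse said her shift was over.",
    ]
    for occ, c in cooccurrence_skew(sample).items():
        if c["female"] or c["male"]:
            print(f"{occ}: female={c['female']} male={c['male']}")
```

A heavily skewed count for a term like “nurse” or “professor” would flag that slice of the data for review or rebalancing before training, which is the kind of curation step the paragraph above describes.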

The issue of bias in genAI models stems from the automation of human biases, which produces unfair outcomes such as gender-based disparities in job opportunities and financial privileges. Bartoletti points out that these biases reflect historical inequalities, such as the pay gap between men and women. To address these issues, she argues, the data that feeds these systems must be carefully curated and corrected so that models do not simply reproduce those inequalities.
