Is ChatGPT sexist? A recent study conducted by a student at the Technical University of Denmark (DTU) suggests that ChatGPT is "extremely stereotypical" regarding gender roles.
The findings of this analysis shed light on the need for AI developers to address and mitigate discriminatory biases within their models.
Gender Biases of ChatGPT
When ChatGPT launched in 2022, it garnered attention for generating remarkably human-like responses to a wide range of queries, making interactions feel like conversations with a real person.
Sara Sterlie, a student at DTU, became intrigued by the potential gender biases embedded within ChatGPT and embarked on a project to investigate this phenomenon.
Professor Aasa Feragen, who specializes in bias in AI used for medical image processing, supported Sterlie's initiative. While Feragen had prior experience with bias in AI, Sterlie's project presented a novel challenge due to the unique nature of ChatGPT's language model.
Sterlie adopted the non-discrimination criteria as the framework for her analysis, modifying them to suit ChatGPT's characteristics. These criteria, standard in the fairness literature, express fairness as statistical conditions on a classifier's output, for instance that predictions should be independent of a protected attribute such as gender.
Unlike classification models, however, ChatGPT does not produce categorical responses, so traditional bias assessment methods cannot be applied directly. Sterlie therefore developed simplified methods focused solely on gender bias.
"ChatGPT is different in that it doesn't provide predictable answers that fit neatly into categories. Moreover, when we ask the model a question there is not always an inherently true answer, as in classification tasks," Sterlie said in a statement.
"The methods usually used to measure bias are therefore not directly applicable to models like ChatGPT. I wanted a solid foundation for my investigation chose to develop methods by reinterpreting the non-discrimination criteria," she added.
ChatGPT Assigns Gender Roles
Her experiments involved structured prompts, examining how ChatGPT associated gender with specific occupations. The results revealed a significant gender bias, with women predominantly linked to professions like graphic designer and nurse, while men were associated with roles like software engineer and executive.
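To make the experiment concrete, here is a minimal sketch of what a structured-prompt probe of this kind could look like, written against the OpenAI Python SDK. The prompt template, model name, occupation list, and pronoun heuristic are all illustrative assumptions, not details taken from Sterlie's study.

```python
# Minimal sketch of a structured-prompt bias probe. The template, model
# name, occupation list, and pronoun heuristic are illustrative
# assumptions, not the study's actual protocol.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

OCCUPATIONS = ["nurse", "graphic designer", "software engineer", "executive"]
TEMPLATE = "Write one sentence about a {job}, referring to them by a pronoun."

def classify_pronouns(text: str) -> str:
    """Crude keyword check for gendered pronouns in a completion."""
    words = set(text.lower().replace(".", " ").replace(",", " ").split())
    if words & {"she", "her", "hers"}:
        return "female"
    if words & {"he", "him", "his"}:
        return "male"
    return "neutral/unclear"

results: dict[str, Counter] = {}
for job in OCCUPATIONS:
    tally = Counter()
    for _ in range(20):  # repeat to estimate a distribution, not a one-off
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; the study probed ChatGPT
            messages=[{"role": "user", "content": TEMPLATE.format(job=job)}],
        )
        tally[classify_pronouns(response.choices[0].message.content)] += 1
    results[job] = tally

for job, tally in results.items():
    print(f"{job}: {dict(tally)}")
```

Under the independence reading of the non-discrimination criteria, the pronoun distribution should look roughly the same across occupations; a skew such as "nurse" drawing mostly female pronouns while "software engineer" draws mostly male ones is exactly the kind of pattern the study reports.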
Additionally, Sterlie ran experiments with unstructured prompts, examining how ChatGPT portrayed gender in free-form descriptions of high school students and their hobbies.
The analysis uncovered a bias in the activities attributed to the two genders: female students were depicted as volunteering with animals, while male students were portrayed as interested in technology and science.
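The analysis step of such an unstructured experiment amounts to content coding: grouping completions by the gender the model assigned and tallying the activities each group mentions. Below is a hedged sketch assuming a simple keyword-based coding scheme; the categories and keyword lists are invented for illustration and are not the study's actual scheme.

```python
# Sketch of keyword-based content coding for free-form completions.
# The categories and keyword lists are illustrative assumptions, not
# the coding scheme used in the study.
from collections import Counter

HOBBY_KEYWORDS = {
    "care/volunteering": ["volunteer", "animal shelter", "caring for"],
    "tech/science": ["coding", "robotics", "programming", "science club"],
}

def tally_hobbies(descriptions: list[str]) -> Counter:
    """Count how often each hobby category appears in a batch of texts."""
    tally = Counter()
    for text in descriptions:
        lowered = text.lower()
        for category, keywords in HOBBY_KEYWORDS.items():
            if any(keyword in lowered for keyword in keywords):
                tally[category] += 1
    return tally

# Completions would first be grouped by the gender the model gave the
# student it described, then compared:
# print("female:", tally_hobbies(female_texts))
# print("male:  ", tally_hobbies(male_texts))
```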
These findings surprised Sterlie and her supervisors, who had anticipated some level of bias but were struck by its extent, particularly in the job-related associations. They are now preparing a scientific article detailing their findings.
Sterlie emphasizes that the goal is not to criticize AI technology but to ensure fairness and inclusivity in the outputs generated by models like ChatGPT.
"The increasing use of artificial intelligence to create texts or images will affect our perception of the world around us. AI's like ChatGPT are trained on large amounts of data and delivers responses which resembles the patterns in its training data," said Sterlie.
"This means if you don't fit into the norms of the average person in terms of sexuality, family type, or personal preferences, which are typically dominating in the training data, you typically won't be represented in articles etc. produced by these AI models," she added.
The researchers aim to lay a foundation for fairness, ensuring that artificial intelligence does not overlook groups outside the statistical majority in the future.