AI-Generated Images of White Faces Now Look More Real Than Human Faces - Study

The study raises concerns about how hyper-realistic white AI faces could spark racial biases online.

Artificial intelligence (AI) has reached a point where generated images of white faces appear more authentic than photographs of real human faces, according to recent research from The Australian National University (ANU).

The study found that participants judged AI-generated white faces to be human more often than they judged photographs of actual human faces to be human. The same pattern did not hold for faces of people of color, pointing to a significant disparity.

(Image: AI-generated blonde woman. Credit: Vicki Hamilton from Pixabay)

Disparity in AI-generated Images

Dr. Amy Dawel, the paper's senior author, said the root cause of this disparity is that AI algorithms are trained disproportionately on images of white faces.

She expressed concern about the potential consequences, emphasizing that if white AI faces consistently appear more realistic, it could reinforce racial biases online, particularly impacting people of color.

The issue extends to AI tools for creating professional headshots, where algorithms trained largely on white faces can alter the appearance of people of color, shifting their skin and eye colors toward those of white people.

The study also uncovered a significant challenge associated with AI "hyper-realism": people often fail to recognize when AI-generated images are deceiving them.

Elizabeth Miller, a study co-author and PhD candidate at ANU, noted that those who believed AI faces were real tended to be paradoxically more confident in their judgments, indicating a lack of awareness when being misled.

The researchers delved into the reasons behind this phenomenon, discovering that there are still physical differences between AI and human faces. However, people tend to misinterpret these differences.

For instance, white AI faces tend to be more proportionate than real ones, and people mistake this averageness for a sign of humanness. Dr. Dawel cautioned that such physical cues may not remain reliable for long, as AI technology is evolving rapidly and could soon erase the remaining distinctions between AI and human faces.

The researchers emphasized the potential repercussions of this trend, including the increased risk of misinformation and identity theft. They called for greater transparency around AI development, advocating for a broader understanding beyond tech companies to identify and address potential issues before they escalate.

Call for Public Awareness

Dr. Dawel highlighted the importance of public awareness in mitigating the risks associated with AI technology. Educating individuals about the perceived realism of AI-generated faces could foster appropriate skepticism and critical evaluation of images encountered online.

The researchers stressed the need for tools to accurately identify AI imposters as a crucial step in navigating the evolving landscape of AI-generated content.

"AI technology can't become sectioned off so only tech companies know what's going on behind the scenes. There needs to be greater transparency around AI so researchers and civil society can identify issues before they become a major problem," Dr Dawel said in a statement.

The findings of the research team were published in the journal Psychological Science.
