Since its rapid rise, particularly after OpenAI's ChatGPT became popular in 2022, AI has continuously raised concerns about potential cultural biases. But is AI actually prone to cultural bias? According to one expert, yes.
Dr. Kevin Wong, an AI specialist from the School of Information Technology at Murdoch University, points out that a fundamental understanding of AI techniques is crucial in addressing these issues.
AI's Cultural Bias
Dr. Wong explains that machine learning techniques, particularly Generative AI, heavily rely on vast amounts of representative data for training. However, bias can creep in when the data is incomplete or exhibits an imbalanced distribution.
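The effect Dr. Wong describes can be seen even in a toy setting. The sketch below, which uses made-up group labels rather than any real system's data, shows how a dataset dominated by one group lets a naive model ignore the minority group entirely while still appearing accurate:

```python
from collections import Counter

# Hypothetical training labels: 90 samples from group A, 10 from group B.
# A model fit to this skewed data will lean toward the over-represented group.
training_labels = ["group_A"] * 90 + ["group_B"] * 10

counts = Counter(training_labels)
majority_class, majority_count = counts.most_common(1)[0]

# A naive "always predict the majority" model ignores group B entirely,
# yet still scores 90% accuracy on this imbalanced data.
accuracy = majority_count / len(training_labels)
print(majority_class, accuracy)
```

High headline accuracy can therefore mask poor behavior for under-represented groups, which is why representative data matters as much as data volume.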
According to Wong, despite efforts by major tech companies to address equity, diversity, and ethical considerations in AI training data, challenges persist because AI behavior can be unpredictable without proper management.
The expert further notes that publicly accessible AI systems have faced particular scrutiny for failing to generate images of interracial couples, a failure that reflects broader biases within these systems.
To tackle these challenges, Dr. Wong emphasizes the need for a comprehensive evaluation and testing strategy.
He advocates for incorporating other AI techniques, such as Explainable AI and Interpretable AI, which promise greater human oversight and predictability in decision-making.
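The appeal of interpretable approaches can be shown with a minimal sketch. The feature names and weights below are hypothetical, but they illustrate the core idea: in a simple linear model, each feature's contribution to a decision is explicit, so a human reviewer can inspect exactly why an output was produced.

```python
# Toy illustration of an interpretable model: a linear score whose
# per-feature contributions can be read directly by a human reviewer.
# The feature names and weights here are invented for illustration.
weights = {"word_count": 0.2, "formal_tone": 1.5, "region_match": -0.7}
features = {"word_count": 3.0, "formal_tone": 1.0, "region_match": 2.0}

# Each feature's share of the final score is explicit and auditable.
contributions = {name: weights[name] * features[name] for name in weights}
score = sum(contributions.values())

print(contributions)
print(score)
```

Opaque deep models offer no such per-decision breakdown, which is why explainability techniques are seen as a route to the human oversight and predictability Dr. Wong calls for.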
Responsible AI, characterized by principles guiding AI development, emerges as another critical area for improving AI systems.
Dr. Wong underscores the complexity of addressing concerns around culture, diversity, equity, privacy, and ethics within AI, suggesting that multi-dimensional and hierarchical approaches may be necessary.
While acknowledging existing issues with diversity in AI, Dr. Wong highlights AI's potential to bridge equity and diversity gaps when deployed responsibly.
He stresses the importance of developing AI systems with ethical considerations that are adaptable to various cultures and individual needs.
However, he emphasizes that rigorous testing and evaluation are imperative to prevent adverse outcomes that could distress different populations.
"It is important for a general system to be developed following some rules and ethical considerations that can then be adapted to different cultures and personal needs," Wong said in a statement.
"However, thorough testing and evaluation are essential before using widely, as some outcomes could cause sensitive and fragile emotions in some populations around the world."
US States Aim to Mitigate AI Bias
In related news, lawmakers in seven U.S. states are introducing legislation to mitigate AI bias, initiating discussions on balancing AI's benefits and risks.
The success of these legislative efforts depends on handling complex issues within a rapidly evolving AI industry valued in the hundreds of billions of dollars.
Despite the introduction of around 200 AI-related bills last year, only a fraction were enacted into law. These bills primarily focused on specific aspects like deepfakes and chatbots instead of broader concerns such as AI bias.
Currently, seven state bills seek to regulate AI bias across industries, highlighting the need for proactive measures to address these issues.
Experts emphasize that states must accelerate efforts to establish comprehensive regulatory frameworks safeguarding against AI bias and promoting equitable and responsible AI development.