A popular chatbot from South Korea has been suspended indefinitely after complaints flooded in that it had used hate speech towards sexual minorities in conversations with its users.
AI kicked off Facebook
The artificial intelligence, named Lee Luda, takes on the persona of a 20-year-old female university student. It was removed from Facebook Messenger this week after attracting more than 750,000 users in the 20 days since it was launched.
The chatbot was created by the Seoul-based startup Scatter Lab. It triggered a flood of complaints after using offensive language about sexual minorities and people with disabilities in its conversations with users.
After the complaints came in, the company released a statement, carried by the Yonhap news agency, apologizing for the discriminatory remarks the AI made against minorities.
The company added that the AI's comments do not reflect its views, and that it is continuing to upgrade the chatbot so that such discriminatory statements and hate speech do not recur.
Scatter Lab had previously said that Lee Luda was a work in progress and, just like humans, would take time to socialize properly. The company said it will relaunch the chatbot after fixing the issue.
While chatbots are not new, Lee Luda had impressed users with the depth and natural tone of its responses, drawn from 10 billion real-life conversations between young couples taken from KakaoTalk, the most popular messaging app in South Korea.
However, praise for the chatbot's familiarity with social media acronyms and internet slang turned to anger after it began using sexually explicit and abusive terms. In one screenshot, Lee Luda stated that it hates sexual minorities, calling them creepy.
Lee Luda also became a target of manipulative users, with online community boards posting advice on how to steer Luda into conversations about sex, according to the Korea Herald.
AI issues
This is not the first time that artificial intelligence has been embroiled in controversy over bigotry and hate speech. In 2016, Microsoft's Tay, an AI Twitter bot that talked like a teenager, was taken offline just 16 hours after launch when users manipulated it into posting racist tweets.
In 2018, Amazon's AI recruitment tool ran into a similar problem when it was found to exhibit gender bias.
Scatter Lab, whose services are very popular among South Korean teenagers, stated that it had taken every precaution to avoid equipping Luda with language incompatible with South Korean social norms and values.
However, Kim Jong-yoon, its chief executive, acknowledged that it was impossible to prevent inappropriate conversations simply by altering the AI and filtering out keywords.
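Kim's point about the limits of keyword filtering can be illustrated with a minimal sketch. This is not Scatter Lab's actual system, and the blocklist terms are placeholders: a filter that only matches listed words is trivially defeated by spacing, character substitution, or toxic phrasing that contains no listed word at all.

```python
# Illustrative sketch of a naive keyword filter (hypothetical, not Scatter Lab's code).
BLOCKLIST = {"slurword", "insultword"}  # placeholder blocked terms

def is_blocked(message: str) -> bool:
    """Block a message only if one of its words exactly matches the blocklist."""
    words = message.lower().split()
    return any(word in BLOCKLIST for word in words)

# An exact match is caught:
is_blocked("slurword is fine to say")   # True

# But simple evasions slip through:
is_blocked("s l u r w o r d")           # False - spacing defeats the match
is_blocked("sl*rword")                  # False - character substitution
is_blocked("I hate that group of people")  # False - toxic meaning, no keyword
```

The last case is the hardest: the hostility is in the meaning, not in any single word, which is why filtering alone cannot guarantee appropriate output.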
Jeon Chang-bae, the head of the Korea Artificial Intelligence Ethics Association, told the Korea Herald that the latest controversy with Lee Luda is an ethical issue stemming from a lack of awareness of the importance of ethics in dealing with artificial intelligence. Scatter Lab is also facing questions over whether its Science of Love app violated privacy laws.
This article is owned by Tech Times
Written by Sieeka Khan