More and more people are reportedly using artificial intelligence to produce hate content, a trend that worries experts and that a UN advisory body previously warned could supercharge propaganda.
Peter Smith, a reporter with the Canadian Anti-Hate Network, says researchers who study hate groups and hate media are observing a growing amount of AI-generated content.
That fits a historical pattern: according to Chris Tenove, assistant director of the University of British Columbia's Centre for the Study of Democratic Institutions, hate groups and white supremacist organizations have long been among the first to adopt new internet tools and techniques.
A UN advisory body raised the issue in December, expressing grave concern that generative AI could amplify Islamophobic, antisemitic, xenophobic, and racist discourse. That content can occasionally spill over into real life.
Smith said that after AI was used to create "extremely racist Pixar-style movie posters," some people printed the posters and put them up outside theaters.
With only a little prompting, generative AI systems can produce images and videos almost instantly. According to Smith, a single person can now create dozens of images with a few keystrokes in the time it once took to produce just one by hand, a task that could require hours of work.
AI Hate Content Surges
B'nai Brith Canada raised the problem of AI-generated hate content in a recent report on antisemitism. According to the report, the past year saw an unprecedented surge in antisemitic images and videos produced, altered, and fabricated using AI.
Richard Robertson, the organization's director of research and advocacy, said B'nai Brith has observed AI being used to create incredibly vivid and horrifying visuals, most of which involve Holocaust denial, diminishment, or distortion.
The warning comes as AI deepfakes remain a persistent problem, with terrorist groups now also using the booming technology to spread propaganda.
AI-Powered Terrorist Propaganda
The IS offshoot Islamic State Khorasan (ISKP), which operates in Afghanistan and Pakistan, released a video in which an AI-generated anchorman appeared to read the news after a May 17 attack in Bamiyan province, Afghanistan, that killed four people, including three Spanish tourists.
According to Khorasan Diary, a website focused on news and analysis of the region, the AI avatar serving as the anchor reportedly spoke Pashto and had features resembling those of Bamiyan natives.
In a separate AI-produced propaganda video, a male digital news anchor claimed that the Islamic State was responsible for a car explosion in Kandahar, Afghanistan.
IS began using AI-generated news bulletins four days after the March 22 attack on a Moscow concert hall that left about 145 people dead, an attack the group claimed responsibility for. In the video, a "fake" AI-generated news anchor presented the Moscow attack.
Furthermore, IS supporters have been using text-to-speech AI and character-generation techniques to present news bulletins from IS's Amaq news agency, according to research reported by the International Center for the Study of Violent Extremism.