Artificial intelligence-generated content created solely to profit from content monetization has earned a new name. Tech experts have dubbed it "slop," as reported by the Guardian.

Like earlier internet-born terms such as "spam" and "troll," "slop" gives this new breed of content a name that users can recognize and even avoid.

Unlike a chatbot, slop is not interactive and is rarely meant to genuinely address readers' needs or answer their queries. Its primary purposes are to imitate human-generated content, make money from advertising, and direct search engine traffic to other websites.

(Photo: JOSEP LAGO/AFP via Getty Images) An AI (artificial intelligence) logo is pictured at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona on February 27, 2024.

Like spam, slop is content nobody wants to see, yet the economics of the internet nonetheless incentivize its production. AI models make it simple to automatically produce enormous volumes of text or images, answer every type of search query, upload countless shareable landscape photos and motivational stories, and generate streams of encouraging comments.

The cost of creating it is recouped if even a handful of people visit the website, reshare the meme, or click on the hosted advertisements.

Developer Simon Willison, one of the first to popularize the term "slop," hopes the phrase will catch on and clearly mark such content as undesirable, just as "spam" did for unsolicited commercial messages.

Slop is most obviously harmful when it is simply wrong. As a prime illustration of the problem, Willison cited an AI-generated Microsoft Travel article that listed the "Ottawa Food Bank" as a must-see attraction in the Canadian capital.


Google's AI Watermarks

Notable companies continue to label AI content on their platforms. Google, for example, recently announced at its I/O developer conference that it is extending SynthID, its AI content watermarking and detection technology, to text and video.

Google emphasized that identifying AI-generated content is essential to transparency and trust in information. The company notes that while SynthID is not a panacea for misinformation or misattribution, it offers a set of promising technical answers to this pressing AI safety problem.

When SynthID was first introduced in August of last year, it watermarked AI-generated images in a way that is imperceptible to humans but detectable by the system. By contrast, the C2PA standard attaches cryptographically signed metadata to AI-generated content.

Google has also enabled SynthID to embed inaudible watermarks in music generated by DeepMind's Lyria model. SynthID is part of a broader push to safeguard against AI misuse, in line with the Biden administration's recommendations.

YouTube on AI Content

YouTube, as of late March, formally requires its content creators to disclose whether their videos contain AI-generated content realistic enough to be mistaken for the real thing.

When creators upload a video to the platform, they are presented with a checklist asking whether their work depicts a realistic-looking scene that never occurred, manipulates footage of a real place or event, or shows a real person saying or doing something they did not do.


Written by Aldohn Domingo

