YouTubers Now Required to Label Realistic AI-Generated Videos

The move aims to prevent viewers from being misled or confused.

YouTube is now officially requiring its content creators to disclose when their videos contain AI-generated material realistic enough to be mistaken for authentic footage.

The announcement comes a week after parent company Google hinted at AI content disclosure while releasing a set of AI safeguards aimed at preventing election disinformation.

Content creators will reportedly be presented with a checklist when they upload a video to the platform. It asks whether their work alters footage of a real place or event, makes a real person appear to say or do something they did not, or presents a realistic-looking scene that did not happen.
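The three questions map naturally onto a simple data model. Below is a minimal, hypothetical sketch in TypeScript of how such a disclosure checklist might be represented; the interface, field names, and labeling rule are illustrative assumptions, not YouTube's actual implementation.

```typescript
// Hypothetical model of the three disclosure questions described above.
// Names and types are assumptions for illustration; YouTube has not
// published an API or schema for this checklist.
interface AlteredContentDisclosure {
  altersRealFootage: boolean;    // alters footage of a real place or event
  depictsFakeActions: boolean;   // makes a real person appear to say or do something they did not
  showsFabricatedScene: boolean; // presents a realistic-looking scene that did not happen
}

// A video would need the "altered or synthetic content" label if the
// creator answers yes to any of the three questions.
function requiresDisclosureLabel(d: AlteredContentDisclosure): boolean {
  return d.altersRealFootage || d.depictsFakeActions || d.showsFabricatedScene;
}
```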


YouTube creators will have to note when their videos use AI-generated or otherwise altered content that appears realistic. Creators who repeatedly fail to add the disclosure may face penalties.

The company made clear, though, that it will not require creators to disclose when they use generative AI for productivity tasks, such as drafting scripts, brainstorming content ideas, or generating automatic captions, because it recognizes that creators use the technology in many ways throughout the creation process.

Additionally, the company will not require creators to disclose synthetic material that is clearly unrealistic or only minimally altered, such as content using special effects or filters.

According to CNN, the new policy will be implemented in the fall as part of a broader rollout of new AI policies.


AI Content Disclosure

Sources indicate that the new policy comes amid the emergence of consumer-facing generative AI tools that make it quick and simple to create convincing text, images, video, and music that are frequently difficult to distinguish from the real thing. The disclosure requirement is intended to help keep viewers from being misled by synthetic content.

Experts in online safety have expressed concern that the spread of AI-generated content may mislead and confuse consumers online, particularly in the run-up to the 2024 elections in the US and other countries.

When a YouTube creator reports that their video contains AI-generated content, YouTube will add a label in the description noting that it contains "altered or synthetic content" and that the "sound or visuals were significantly edited or digitally generated." For videos on "sensitive" topics such as politics, the label will appear more prominently on the video player itself.
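The placement rule described above, a prominent on-player label for sensitive topics and a description note otherwise, can be sketched as a simple decision function. The topic categories below are assumptions for illustration; the article names only politics, and YouTube has not published this logic.

```typescript
// Hypothetical sketch of the label-placement rule described above.
// Topic names other than "politics" are assumed examples.
type LabelPlacement = "on-player" | "description";

const SENSITIVE_TOPICS = new Set(["politics", "elections", "health", "finance"]);

function labelPlacement(topic: string, disclosedAltered: boolean): LabelPlacement | null {
  if (!disclosedAltered) return null; // no disclosure, no label
  // Sensitive topics get the more prominent on-player label;
  // everything else gets the note in the description.
  return SENSITIVE_TOPICS.has(topic) ? "on-player" : "description";
}
```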

The company also stated last year that content produced with YouTube's own generative AI tools, launched in September, will carry clear labeling.

Gemini Election Restrictions

The new policy was hinted at last week when Google also announced it would bar its AI chatbot, Gemini, from answering certain election-related questions. Asked even a broad question about the US election, the chatbot reportedly responds, "I'm still learning how to answer this question. In the meantime, try Google Search."

Though the company noted in its statement that not all election queries are subject to the restriction, the move suggests Google is keeping Gemini apolitical for now. Gemini will still handle certain queries, but Google has not specified which ones the chatbot will answer rather than redirecting users to Google Search.
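Conceptually, this kind of gating can be modeled as a filter sitting in front of the model's normal response path. The sketch below is speculative; the term list and substring matching are assumptions, and Google's actual classification criteria are undisclosed.

```typescript
// Speculative sketch of the election-query gating described above.
// The term list and matching logic are assumptions for illustration.
const DEFERRAL =
  "I'm still learning how to answer this question. In the meantime, try Google Search.";

const ELECTION_TERMS = ["election", "candidate", "ballot", "vote"]; // assumed

function generateAnswer(prompt: string): string {
  return `Model response to: ${prompt}`; // stand-in for the real generation path
}

function answerOrDefer(prompt: string): string {
  const p = prompt.toLowerCase();
  // Defer any prompt that looks election-related, even broad ones,
  // matching the behavior the article describes.
  return ELECTION_TERMS.some((t) => p.includes(t)) ? DEFERRAL : generateAnswer(prompt);
}

// Example: answerOrDefer("Who is running in the US election?") returns the deferral text.
```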

