YouTube is taking steps to help viewers better understand whether the content they're watching was created, in whole or in part, by generative AI.
“Generative AI is revolutionizing the ways creators express themselves – from conceptualizing storyboards to experimenting with tools that enhance the creative process,” YouTube stated in a message shared on Monday. “However, viewers are increasingly desiring more transparency regarding whether the content they are seeing has been altered or is synthetic in nature.”
Part of the effort to build trust on the popular streaming platform is a new tool that requires creators to tell viewers when realistic content (defined by YouTube as “content that a viewer could easily mistake for a real person, place, or event”) is made with altered or synthetic media, including generative AI.
YouTube offered examples of content that creators should mark as altered:
- Using the likeness of a realistic person: Digitally altering content to replace the face of one individual with another’s, or synthetically generating a person’s voice to narrate a video
- Altering footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different from reality
- Generating realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town
When a creator marks content this way, the disclosure will appear as a label. YouTube said that for most videos, the label will show up in the video’s expanded description, but for content touching on more sensitive topics – such as health, news, elections, or finance – it will appear on the video itself for greater prominence.
YouTube added that it is not requiring creators to disclose content that is clearly unrealistic, is animated, includes special effects, or has used generative AI merely for production assistance.
The Google-owned company also made clear that creators do not need to flag every instance of generative AI in the broader production process. For example, there is no need to disclose when the technology has been used to generate scripts, content ideas, or automatic captions.
The new labels will be rolled out across all YouTube platforms in the coming weeks.
And a warning for YouTube creators who try to sidestep the requirement to flag altered content: YouTube said that in the future it will “examine enforcement measures for creators who consistently choose not to disclose this information.” It added that it may even apply a label itself in cases where a creator has failed to do so, “especially if the altered or synthetic content has the potential to confuse or mislead people.”