As misinformation and deepfakes run wild, YouTube has announced a way for creators to self-label content that is AI-generated.
Specifically, creators will be expected to put YouTube AI labels on “realistic content.” The company defines this as “content a viewer could easily mistake for a real person, place, or event” that is made with “altered or synthetic” media.
In its announcement, the company elaborated on what kind of synthetic content requires YouTube AI labels. It includes content using the likeness of a real person, altered footage of real events or places, and realistic scenes.
“The new label is meant to strengthen transparency with viewers and build trust between creators and their audience,” YouTube explained in a blog post.
But just because AI-generated content needs to be disclosed doesn’t mean creators can’t use it in their creative process at all.
As YouTube points out, creators who use AI for “productivity” won’t need to disclose anything with labels. In other words, those making videos with AI-generated scripts or AI-assisted brainstorming won’t need to add one.
This may well prove controversial, especially if a creator passes off AI-generated ideas and scripts as their own.
The same goes for things like synthetic beauty filters. Presenting an altered, filtered version of yourself isn’t exactly prohibited by YouTube. However, it raises ethical questions about unrealistic beauty standards and an impressionable young audience.
But as the election draws closer, YouTube seems to be trying to keep things as simple and misinformation-free as possible. Creators won’t be required to put AI labels on “clearly unrealistic content” like animation. But YouTube is making it clear that its tolerance for undisclosed, hyperrealistic AI-generated content is low.
What are your thoughts on YouTube AI labels? Email tips@passionfru.it to share your story.