YouTube’s New Guidelines for Videos Using AI Are a Step in the Right Direction


If you’ve spent any time online this past year, you know that AI is changing the internet. As a result, tech platforms have been scrambling to figure out what to do about AI-generated content.

Yesterday, YouTube announced that it will require users to disclose when AI-generated content appears in their videos. In a news release, the platform introduced a set of new policies aimed at making AI use more transparent.

Essentially, the new policies include requiring creators to disclose when they’ve created — or incorporated — AI content for their posts, such as making AI content “that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.” 

Those who don’t, according to the news release, “may be subject to content removal, suspension from the YouTube Partner Program, or other penalties.” YouTube added, “We’ll work with creators before this rolls out to make sure they understand these new requirements.”

In addition, viewers will be able to submit a request form asking YouTube to remove AI content “that simulates an identifiable individual, including their face or voice.” The company noted that not all requests will be honored, and that it will “consider a variety of factors when evaluating these requests.”

“This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar,” the release said. Another factor could be whether a creator uses their own likeness in a video.

Interestingly, it was just last week that I wrote about whether AI-generated digital clones could help creators mitigate symptoms of burnout. Short answer: They might help in the near term but are likely to make life more difficult for creators in the future. If this latest announcement from YouTube is any indication, that future is already upon us. YouTube says the penalties for not labeling AI-generated content accurately will vary but could include demonetization. 

Still, it remains unclear how YouTube will know if an unlabeled video was actually generated by AI. As YouTube spokesperson Jack Malon told The Verge, the process is likely going to be complicated since “there’s no definition of ‘parody and satire’ for deepfake videos yet.” 

Not to mention, as NBC News Tech and Culture reporter Kat Tenbarge tweeted, “YouTube reserves the right to decide whether unauthorized AI content of you or anyone else should be taken down or stay up. We don’t have final say over our likenesses.”

AI technology capable of creating deepfakes has improved dramatically in recent years and grows more accessible every day. A whole slew of apps can help people manipulate or create realistic videos of recognizable people saying and doing things they didn’t. And let’s not forget that one of the most common uses of this technology remains depicting women in nonconsensual pornographic videos.

In fact, there isn’t even an established legal framework for copyright law or name, image, and likeness (NIL) protections in the generative AI era. That said, there is growing interest in the regulation of AI from the President to the Supreme Court. And for YouTube, this is a step in the right direction.
