Yesterday morning, I sat in Google’s headquarters at Chelsea Pier for its “Made on YouTube” press conference, which introduced a new suite of AI tools, rolling out over the next year, meant to make content creation on the platform just a bit easier. As I watched the show and the giant ocean tankers travel across the Hudson River behind the towering windows, I had a knot in my stomach (and it wasn’t just indigestion from the breakfast spread).
AI is just a shiny, fresh repackaging of the algorithm, which has been running all of our lives since the internet basically became four different social media sites. And Google is the undisputed king of search engine optimization, having essentially written the blueprint for getting noticed online.
The company has already started to invest heavily in the buzzy tech, testing out generated summaries for YouTube videos and placing algorithmic answers at the top of search results. Packaging this new way to create content as an aid for creators feels like just another way to keep its content mill ever-churning.
As of yesterday, “Dream Screen” is a new feature that will create AI-generated videos based on prompts that can be used for video backgrounds. YouTube Studio is also adding AI features that will generate topic ideas, music choices, and outlines personalized to each creator’s content. To top it all off, there’s an AI dubbing feature on the way that will allow creators to dub their videos in other languages.
AI-created videos can be made quickly, cheaply, and efficiently, and they’ve already started to flood video platforms. I can’t go more than a few video swipes on YouTube Shorts without seeing an AI-generated caption from a podcast ripped from some random account farming impressions.
And AI content isn’t always highly vetted or correct: The BBC recently found 50 channels across 20 languages that use AI to make videos on YouTube that were “spreading disinformation disguised as STEM [Science Technology Engineering Maths] content.”
Auto-generating tools have also been used for abuse and harassment, even creating deepfaked images of people without their consent. YouTube’s new tools will block some prompts to keep some bad ideas at bay, but the human imagination can always find a workaround. When those safeguards fail, YouTube plans to fall back on its Community Guidelines, enforced by “machine learning detection and human evaluators,” as YouTube CEO Neal Mohan put it during the Q&A portion of the event.
“Whether it’s generated through AI tools or video uploaded to YouTube, it is subject to our same Community Guidelines,” Mohan said. “Where we know there might be challenges with technology like this, the rules of the road apply.”
But the Community Guidelines don’t always help creators, and a lot can get lost in the 500 hours of content uploaded to the platform every minute. Even if problematic videos are taken down, it seems like it will be easy enough to just find a way to upload them again.
I fear that YouTube’s wholehearted embrace of AI and the content it generates could leave creators and fans vulnerable to abusive or incorrect material. These tools might be packaged as a way to make pandas drink coffee over a dubstep beat, but there needs to be proper regulation or oversight beyond the Terms of Service. And I’m not sure there will be.