On Monday, YouTube announced – in a little-seen blog post on its Support/Help forums – that the platform wants “to give creators more choice over how third parties might use their YouTube content.” So it has introduced a new setting called “Third Party Training,” with which creators can “choose to allow” AI companies to train new models on their content. YouTube describes the setting as “an important first step in supporting creators and helping them realize new value for their YouTube content in the AI era.”
TechCrunch published a list of the “third parties” that YouTube creators can specifically allow to train AI on their creations. It includes most of the usual suspects who have poured billions into AI development: Apple, Anthropic, ByteDance, Nvidia, IBM, Meta, OpenAI, and Microsoft.
This whole announcement represents a strange and almost shockingly misleading rhetorical maneuver. It’s clearly designed to obfuscate the major issue at play: the fact that YouTube videos have already been used, en masse, by these same companies to train AI models without any prior authorization. Back in April, The New York Times and The Wall Street Journal reported that thousands of YouTube subtitle files were included in The Pile, a dataset used to train AI models by Apple, Anthropic, Nvidia, and others. Google itself was found to be using YouTube video transcripts to train its own AI models, even though taking content from YouTube and using it without authorization violates YouTube’s own Terms of Service.
(YouTube isn’t alone among video streaming platforms here. OpenAI’s recently launched text-to-video generator Sora was almost certainly trained on Twitch gaming streams and walkthroughs.)
So the new Third Party Training setting allows YouTubers to opt in, giving companies permission to use their content for AI training. But this is a meaningless distinction, as there is no real way to opt out. For many creators, the damage has already been done; they’re literally authorizing companies to do something those companies have already been doing. TechCrunch notes that “YouTube was unable to say if the new setting could have any sort of retroactive impact on any third-party AI model training that has taken place,” reinforcing what an empty gesture this new authorization setting represents.
Particularly galling is YouTube’s suggestion that its goal is to help creators “realize new value” for their content. Creators are not being offered anything in return for licensing their work to these companies, which sets that new value at approximately $0.00. Should a creator opt into allowing, say, Apple to use their videos for AI training, they’re doing so explicitly without any kind of compensation, establishing precisely the wrong precedent. Giving your content away for free is hardly a first step toward realizing new value for it. Quite the opposite, in fact. For a commodity to have value, it can’t be available for free. This is pretty basic economics.
Hype is one thing. Convincing creators that their workdays will be more efficient, and their channels more successful, after integrating AI into their processes is fair game, and may even be a reasonable argument to make for some kinds of creators. It’s certainly possible to imagine ways that AI apps could be useful for some individuals, depending on how they work. But YouTube’s “Third Party Training” setting moves beyond marketing and PR-speak and into the realm of misinformation.
The fact is, at this moment, YouTube creators have no real control over whether or not their videos are being used to train AI models. It’s happening quietly behind the scenes right now, and Google is either unable or unwilling to put a stop to it on its own. That’s just reality. Giving creators an opt-in button to press is such a meaningless gesture that it’s very nearly offensive.