TikTok accidentally posted a link to an internal beta version of its new AI tool, Symphony Avatars — and it went about as well as you'd expect.
TikTok rolled out its Symphony Digital Avatars last week. The new tool lets brands generate AI influencers (based on real actors) and dub content straight from the app. With a few clicks, brands can get these ambassadors to say pretty much anything.
Only users with a TikTok Ads Manager account are supposed to be able to access this feature. But on June 21, CNN’s Jon Sarlin found a loophole.
Due to a technical error, which went unflagged for a few days, anyone with a personal TikTok account could use the tool. The resulting videos were unmoderated, unmarked, and a little disturbing.
In a thread shared to X, Sarlin noted there were “zero content restrictions” on the tool. He then shared several videos he had made, featuring AI-generated avatars reciting Hitler’s “Mein Kampf,” white supremacist rhetoric, misinformation about elections, and Osama Bin Laden’s “Letter to America.”
In a statement to CNN, a TikTok spokesperson said a “technical error” had led to “an extremely small number of people” accessing an “internal testing version” of the tool.
“If CNN had attempted to upload the harmful content it created [to TikTok], this content would have been rejected for violating our policies,” the spokesperson added. “TikTok is an industry leader in responsible [AI-generated content] creation, and we will continue to test and build in the safety mitigations we apply to all TikTok products before public launch.”
According to Sarlin, “TikTok did not respond to a follow-up about what they are doing to prevent a mistake of this magnitude from happening again.”