AI’s Struggle
Last week, TikTok introduced a new set of tools for advertisers and brands, allowing them to create original ads starring AI-generated avatars designed to resemble real people. These so-called “Symphony Digital Avatars” are based on actual video footage featuring paid actors, whom TikTok hired and recorded expressly for this purpose. The plan, down the road, is to allow brands to either use these avatars based on these relatively obscure performers or to choose avatars resembling TikTok stars and notable influencers. Sort of a one-stop celebrity endorsement shop, without requiring the celebrity to take time out of their busy schedule.
Obviously, there’s a certain appeal here for a brand designing a new marketing campaign. Using AI-generated videos instead of filming a real actor gives them far more nuanced control over the finished product. The avatar can be placed in any setting. Can speak in any of a range of pre-selected languages. And both their visual appearance and their “performance” can be tweaked and altered on the fly without expensive or time-consuming reshoots.
The Symphony Digital Avatar system launched last week, with just one tiny little snag. CNN tech reporter Jon Sarlin quickly discovered that TikTok had opened the system up to everyone, not just brands, and had put no guardrails or safety measures in place. So any member of the public could upload just about any script for the AI-powered avatars to read. That includes typical brand messaging. But also content like Osama bin Laden’s “Letter to America.” White supremacist literature. Or excerpts from Hitler’s “Mein Kampf.”
Recall, these avatars are designed to resemble real people, not invented fictional personas. And the plan is to eventually use this system to make original ads featuring notable celebrity creators. So it gives individual bad actors some real power to put horrible and offensive words into the mouths of private citizens in a realistic and easy-to-share manner. Shortly after CNN posted a video of their findings, including avatars reading from “Mein Kampf,” TikTok took down the entire system. A spokesperson attributed the problem to a “technical error, which has now been resolved.”
Growing Pains or Something More Serious?
These sorts of issues are not uncommon for new AI tools. Sometimes the results are innocuous and even funny. Twitter user @DrakeGatsby noted that a likely AI-generated Daily Mail article confused Natalie Portman’s infamous “Saturday Night Live” rap with a real confession about her past cocaine use, relating the lyrics as fact. (The actual article has since been updated, with no acknowledgment or retraction explaining the mistake.)
It’s also very likely that one of the main uses of ChatGPT is helping students with their homework. Though we only have one summer of data so far, use of the chatbot appears to drop off a cliff whenever school isn’t in session and then return once September arrives. One Stanford study concludes that AI is not being used all that much to cheat in high schools. But those results were based on self-reporting by students themselves, or on teachers catching students actually using a smartphone to cheat during an in-class exam. If you open up the definition of cheating to include, say, having ChatGPT write part of an essay for you or give you the answers to some of your math problems, we lack conclusive data.
That’s not great, but sometimes, the downsides of these AI tools are even more toxic and extreme than writing a few paragraphs of some kid’s “Romeo and Juliet” paper. Just this week, WIRED reported on a new trend in deepfake pornography, swapping the faces of celebrities and famous women into explicit videos from the defunct website GirlsDoPorn (GDP). This site was part of an infamous sex trafficking operation that has been broken up and prosecuted by the US Department of Justice. So the original women in the videos have now been victimized multiple times, first by the creators of GDP, and now again by deepfake creators using the original clips to produce doctored remakes.
Some of the deepfake videos feature watermarks from Akool, a US startup offering various generative AI services. Founder and CEO Jiajun Lu says that these videos violate the company’s terms of service. That’s all well and good, but it didn’t stop bad actors from using the company’s tools to make this content in the first place.
Worth It: AI Edition
By the admission of OpenAI executives themselves, AI apps are likely to put many artists and creatives out of work (though CTO Mira Murati isn’t convinced their jobs should have existed in the first place). So we’re not talking just about minor goofs or slip-ups here and there, but rather tens of thousands of people unemployed. Along with negative effects like the re-traumatization and exploitation of sex trafficking victims, the mass theft of copyrighted materials, doing kids’ homework for them, and more.
Yes, creating a truly ground-breaking, innovative, world-shifting new technology is no easy feat, and never a straight line. There will always be unexpected problems and issues and bumps in the road, and we can’t expect Generative AI apps to be any different. But, at the end of the day, is the final AI product going to be worth all of these compromises and sacrifices? Does anyone still believe AI is the future who doesn’t have a personal investment in the technology’s success?
I Don’t Want to Grow Up, I’m a Toys “R” Us Shareholder
This week, formerly bankrupt toy store Toys “R” Us released a new commercial that was almost entirely generated by the OpenAI text-to-video app Sora. It relates the story of the company’s founder, Charles Lazarus, who had a dream in which he envisioned the retail store and its mascot, Geoffrey the Giraffe. (I say “almost entirely” because the creative agency Native Foreign employed some real human artists to tweak the visuals, and human musicians composed the score.)
The ad was the brainchild of Toys “R” Us chief marketing executive Kim Miller, who came up with the concept after attending a “brand storytelling group.” She then joined forces with the creative agency Native Foreign, an early collaborator with OpenAI and Sora. These are all people with a vested interest in making stuff with AI: marketing executives who want to demonstrate that they’re on the cutting edge and OpenAI executives themselves. The ad was launched at Cannes Lions, where more marketing executives, Wall Street investors, and tech CEOs applauded their shared love of innovation, fresh ideas, and already owning Nvidia stock.
In the end, the ad has as much to do with promoting AI as actually getting people to shop at Toys “R” Us stores. Standalone Toys “R” Us stores don’t even exist anymore. Post-bankruptcy, the holding company that controls the brand has re-opened mini-locations within select Macy’s stores nationwide; there are about 45 of these total so far. So the experience being promoted by the ad doesn’t even actually exist!
CNN diplomatically notes that the reactions were “mixed” on social media, but on a cursory glance, the response seems largely negative, even when the video was posted by nominal AI enthusiasts. Comments have since been turned off on Toys “R” Us’s official YouTube copy of the video, suggesting it was also not met with universal praise there.
That seems to largely be the story of AI for now. The tech industry moguls who own it, and the various other executives and managers from other industries who hope to save money by exploiting it, are all in and working overtime to convince the rest of us that it’s good and workable and here to stay. Meanwhile, the people actually being asked to work with and employ these tools – like writers, artists, and creators – continue rejecting them as unfinished, incomplete, unreliable, or worse.
Adobe ran a survey on its Discord among dedicated users about the inclusion of Generative AI offerings within the Substance 3D animation and visualization tool. The overwhelmingly negative response from actual users tells the entire story. Adobe is selling this idea hard, but so far, its customers aren’t buying.