On April 22, journalist Emanuel Maiberg of 404 Media published a disturbing investigation into Instagram’s ad ecosystem. The report revealed that Instagram is profiting from multiple ads that explicitly invite people to create nonconsensual deepfake nudes with AI apps. One particularly egregious ad had a picture of Kim Kardashian with text that read, “Undress any girl for free. Try It.”
These ads ran on Facebook, Instagram, Facebook Messenger, and Meta’s in-app ad network from April 10 to 20. The report is especially troubling given that deepfakes disproportionately affect teenage girls, a demographic Meta has repeatedly been criticized for harming.
Only a few states have laws addressing deepfakes. Meta representatives, however, have said the company bans nonconsensual deepfakes. Meta also bans ads containing adult content, and says it uses a mix of AI detection systems and human review to identify that content. The company deleted these deepfake ads after 404 Media ran its story, but it clearly failed to detect and address them on its own.
So why did this happen? Well, as 404 puts it, it’s clear Instagram is either “unwilling or unable” to enforce its own policies about AI. And perhaps that’s because it has some twisted priorities when it comes to the AI tools it is investing in.
Meta struggles with content moderation
“Instagram and Facebook are both decaying platforms that don’t just enable criminal behavior but actively profit from it and either don’t know or don’t care how to stop it,” Jason Koebler of 404 Media said on X. “Content moderation is hard. But Meta is at a point now where they are regularly unable to find obvious illegal content unless a journalist sends them a direct link to it.”
This is hardly the first time Meta has failed to enforce its own policies when it comes to AI. Back in March, NBC News found that it was running other ads for sexually explicit deepfake services. One even advertised a blurred nude image of an underage actress.
“Companies like Meta will say they’re not responsible for user-generated content — they don’t make it, they just moderate it,” NBC News reporter Kat Tenbarge said in a tweet. “The ad stuff is even more egregious. It suggests that companies like Meta can’t even keep track of who is paying them and the content their apps endorse.”
In addition, on April 16, Meta’s Oversight Board (its content moderation policy council) announced investigations into two cases in which Instagram’s and Facebook’s content moderation systems failed to detect and delete explicit deepfakes of public figures. According to TechCrunch, in one case it took multiple reports from users before Meta took down the photos.
Meta’s biggest AI push yet
Meanwhile, Meta is throwing its full weight behind AI-powered tools that make it quicker and easier for its users to produce more content than ever before. We’re in the midst of Meta’s biggest AI push yet. If you used Instagram or Facebook this week, you’ve probably noticed that its AI “smart assistants” are now replacing the search bar.
Meta’s latest AI push specifically targets creators. On April 15, the New York Times revealed that Instagram is privately courting top creators with a new AI program called “Creator A.I.”
“Creator A.I.” will use creators’ data to make a personalized chatbot. The bot replicates a creator’s voice, likeness, and mannerisms to message fans in DMs and comments automatically.
Back in September 2023, we reported on Meta’s attempts to create fake AI influencers based on celebrities. MrBeast was paid millions to participate. His fake persona, named comedyzach, has been posting some very odd AI-generated images of cats and sloths filing taxes for its 35K followers.
How have creators responded to Meta AI?
So how are creators reacting to all of Meta’s AI buzz? Some have recently expressed general annoyance or disdain for how much AI has taken over every aspect of the platform.
[Post by @thecultureofme on Threads]
Creators the Times spoke with about the new “Creator A.I.” program were hesitant and worried about how a Meta AI bot could put “words in their mouth that they don’t agree with.” Others “balked” at the idea that a chatbot could replace their authentic interactions with fans, which seems to be a common sentiment online.
At recent events I’ve attended, I’ve heard whispers of graver concerns, ranging from fears of racism embedded in AI to worries that the industry will become even harder to survive in.
“I’m worried about AI stunting creativity, replacing the need to use our brains,” creator Jessica Morrobel told Insider this month. “It has the ability to pull your image and likeness and generate material. So what will be the need for us at that point?”
Still, there’s a lot of buzz and excitement about AI online, particularly among the upper echelon of creators. A 2024 Kajabi report found that six-figure-earning creators were twice as likely as other creators to use AI. Big-name creators like Amouranth constantly hype how AI chatbots can earn them extra cash with minimal effort.
What does Meta have to gain?
Still, it’s worth asking: why is Meta, in particular, trying to recruit these massive creators to churn out AI content? What does it have to gain? Is it to provide genuine value for creators, as the company claims? Or is Meta trying to marginalize and make obsolete the very thing that made it what it is today?
And most importantly, are creators really banding together to address the potential concerns and pitfalls that these new technologies pose?
If this latest news from 404 Media shows us anything, it’s that the worst consequences of AI are not hidden in some dark underbelly of the internet. They are not just in obscure 4chan communities and the dark web. These harms are being massively pushed out in the daylight by one of the biggest, richest, most powerful companies in the world.