Rapper Megan Thee Stallion has spoken out after being targeted with a deepfake pornographic video over the weekend. According to NBC, at least 18 posts sharing the video are circulating on X, six of which have more than 30,000 views apiece.
In a post on June 8 with over 12 million views, the musician said: “It’s really sick how yall go out of the way to hurt me when you see me winning. Yall going too far, fake ass shit. Just know today was your last day playing with me and I mean it.”
Searches for “Megan Thee Stallion AI” lead to a “something went wrong” error page on X, Passionfruit confirmed. In a statement to Glamour, a representative for X said that the platform’s rules “prohibit the sharing of non-consensual intimate media” and claimed it is “proactively removing this content.”
A disturbing pattern
This isn’t the first time a celebrity has been targeted with deepfake content. In 2023, streamers Amouranth and QT Cinderella spoke out about being targeted. Then, in January, Taylor Swift was on the receiving end of a similar campaign on X.
According to The Independent, the deepfake images of Swift were viewed 27 million times before being deleted. In the aftermath, numerous experts and creators raised concerns about the lack of legal protections for those targeted by deepfakes.
While there is a proposed federal bill dedicated to stopping deepfakes in the U.S., the Preventing Deepfakes of Intimate Images Act, it has not yet been passed into law. Currently, at least 14 U.S. states have passed laws addressing nonconsensual sexual deepfakes. But overall, legislation is limited.
The data suggests we have a real problem with deepfake AI pornography. According to research company Sensity AI, cited by the MIT Technology Review, between 90% and 95% of deepfake videos are non-consensual pornography, and 90% of that content depicts women.
The question is, how much longer will this go on before something is done about it? If even Megan Thee Stallion and Taylor Swift can fall victim to something like this, it paints a grim picture of how regular people will fare, meaning those without a whole social media and legal team backing them.
It feels like nothing has changed in terms of the practical protections platforms offer against deepfake content. Between the Taylor Swift campaign in January and now, X itself has clearly not done enough to curb this kind of material.