Over the past 12 months, Taylor Swift has evolved from pop star into arguably one of the biggest global musical phenomena of our time. Her Eras Tour became the highest-grossing music tour ever, earning over $1 billion. She brought sports to her fan base by dating Kansas City Chiefs star Travis Kelce and re-released her entire discography to raucous acclaim. Her impact on industry and media can’t be overstated, so when something happens to her, it goes mainstream, even if it has happened to others before.
Over the weekend, nonconsensual deepfaked images of Swift in sexually explicit poses were shared widely on X, which most of us still call Twitter. Before the account that posted the deepfakes was suspended, the images were viewed 27 million times and amassed 260,000 likes in 19 hours, according to The Independent. The story about the images went far more viral than the images themselves, with dozens of outlets covering them.
Twitter has gone into full panic mode and seems to be trying to hide the damage on the platform. Searches for “Taylor Swift” and “Taylor AI” lead to a “something went wrong” error page on X, which Passionfruit confirmed. The site has dealt with a never-ending cascade of controversy, with advertisers fleeing due to the gutting of the platform’s content moderation teams and the antisemitism pushed by its owner, Elon Musk.
Deepfake images use artificial intelligence to superimpose a person’s face onto another body. Much of the time, the technology is used to depict women in nonconsensual sexual situations, which is both gross and alarming. From Twitch streamers to the average teenager, deepfaked images can completely uproot victims’ lives and cause them massive distress.
Currently, there is little to no protection for those targeted by these images, with the law struggling to keep up with the constantly evolving online landscape.
“The viral Taylor Swift deep fake porn is horrifying but I just want to remind you guys that less famous women, and completely non-famous ones, have been victimized in this way for YEARS. And the platforms have done essentially nothing about it,” Rolling Stone reporter EJ Dickson tweeted on Jan. 25.
But some are hoping that now that Swift is involved, change can happen. When something happens to Swift and her fanbase (the Swifties) gets involved, it hits the mainstream in ways that truly make a splash. In September, Swift shared a post on Instagram asking her 272 million followers to register to vote, leading to over 35,000 new registrations.
Getting the White House to respond isn’t easy, but the government body released a statement about these deepfakes. “We are alarmed by the reports of the … circulation of images that you just laid out,” White House Press Secretary Karine Jean-Pierre told ABC News. “While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules.”
Congressman Tom Kean of New Jersey used the Swift story on X to push the AI Labeling Act, introduced in November 2023, which would require AI-generated work to be disclosed and impose additional government oversight on the companies that make these programs.
Last week, Rep. Joe Morelle also mentioned the Swift story in promoting the Preventing Deepfakes of Intimate Images Act, which would allow deepfake victims to file civil actions against their perpetrators.
Whether you are a superstar or a nobody, deepfake images are unavoidable. Social media platforms currently have little incentive to keep their networks free of this objectionable content. As long as they follow lax government regulations and keep advertisers somewhat happy, they can keep earning enough to keep the lights on.
But the victims, whose numbers grow every day, are left with shattered lives, stripped of privacy and safety. Hopefully, Swift’s story will sound the alarm loudly enough for something to finally be done.