Banning content creators from social media platforms, aka deplatforming, is controversial. Creators who have been on the receiving end rail against it. Others believe it is a vital method of keeping online conversations civil. Some acknowledge its successes and failures at the same time. Platforms have banned some of the biggest names not just in the creator space but in public life, including at least one former US president.
A new analysis looks at the impact of social media bans not just on a single platform but across the whole online ecosystem. It claims to finally offer a definitive answer to whether deplatforming someone affects their ability to draw attention.
Manoel Horta Ribeiro, a social media researcher at the Swiss Federal Institute of Technology, led the team of researchers behind the report. They carefully analyzed the aftermath of 165 deplatforming events affecting 101 creators worldwide. Platforms studied included Facebook, Twitter, YouTube, and Instagram.
The researchers then measured public interest in those creators via Google searches and Wikipedia page views following a ban. They found that deplatforming a creator resulted in a 63% drop in Google searches for that individual and a 43% decrease in Wikipedia page views.
“We used all-encompassing signals that capture attention across the web,” Horta Ribeiro told Passionfruit.
The analysis aimed to answer whether deplatformed individuals simply pick up where they left off on an alternative platform. The answer: they may migrate, often to more fringe platforms such as Gab and Rumble, but overall attention to them decreases across the whole web.
However, not all deplatforming events are equal. “The reason behind your deplatforming is correlated with the effect,” Horta Ribeiro said. Banned creators who spread fake news or disinformation saw a greater drop in attention than those who shared hate speech.
The risks involved in deplatforming
Those who find themselves, or creators they follow, on the wrong side of deplatforming often worry about unjust bans.
Such a significant impact on recognition can be dangerous, especially when coupled with platforms’ frequent lack of transparency about how they decide whom to ban or block. Black creators, for example, have claimed TikTok unduly barred them, and disabled users have criticized the platform for similar issues.
It’s the reason why TikTokers and Instagram influencers will often use words like “seggs” instead of “sex,” “unalive” instead of “dead,” or alternative words like “panini” to describe the pandemic. Creators call this language “algospeak.” It is part of a game of whispers about what platforms allow and don’t allow.
“Considering that deplatforming is often used against the wrong audiences, like sex workers just trying to make a living, or sex educators or activists, it’s a huge power for platforms to have that they use sparingly. And even when they do use it, it’s concerning who it targets,” explains Carolina Are, a content moderation researcher at Northumbria University’s Centre for Digital Citizens.
Karl, the owner of a London-based kink club called Klub Verboten that uses Instagram to advertise its regular club nights, often uses code words to avoid deplatforming. Meta, the parent company of Instagram, banned his official Instagram account in June 2023 for no apparent reason.
After journalists contacted the company, Meta reversed the ban and admitted that it had removed “a number of the accounts” in error.
“When it comes to deplatforming, I think it’s quite harsh,” Karl told Passionfruit. “You can eradicate someone on a platform in a morning.”
Karl noted that both platforms and society at large encourage us all to develop a presence on social media, yet that presence “can just get erased” overnight. He wishes there had been “more thought process and an easier approach when it comes to deplatforming” his account.
Stories like Karl’s are why some remain cautious about giving social media platforms more power to step in and ban people, even temporarily. The labyrinthine process of appealing account bans is so difficult that ordinary users often can’t get a fair hearing unless they persuade the media to cover their case.
Are herself, despite her contacts within many of the big platforms, has repeatedly faced social media bans for posting videos of her pole dancing. She only had her account restored after raising it with the media.
The case for more intervention
Reflecting on his latest study, Horta Ribeiro believes it would be good for platforms to intervene more. More than that, he believes the findings highlight the importance of consistency in whom platforms choose to deplatform and how.
“I think a lot of the problem around platforms is that they make very strict guidelines for some specific things, and they enforce them very unevenly and very arbitrarily,” he said.
The good news, however, is that Horta Ribeiro’s research suggests the effects of temporary social media bans are less severe than those of permanent ones. “There’s a lot of cases where these are temporary events,” he said.
In cases where platforms have taken away access from individuals but later restored it, their comparative popularity across the entire web has returned, too.
Horta Ribeiro suggests that temporary cooling-off periods could be a good halfway house. They could allow platforms to intervene more readily against those spreading hate speech or disinformation, without rapidly redrawing their policies or inviting criticism.
“This could be a good compromise for platforms to try to have this month-long, or a couple of months long, deplatforming instead of a very, very long, undefined intervention,” he added.
That means temporary bans, like those Donald Trump received from various social media platforms in the aftermath of the Jan. 6 insurrection, could become a more commonplace way to de-escalate situations.
How to enact deplatforming
While the data suggests that deplatforming works, convincing ordinary people of its efficacy still requires some work.
Taking a more interventionist approach doesn’t necessarily chime with general public opinion. A separate study published in January 2024 suggested limited public demand for content moderation.
However, public opinion depends on the content in question. In the study, 40% of people shown a “threatening” post said social media companies should do nothing, while two-thirds said companies don’t need to take action against an “intolerant” or “uncivil” post.
And not everyone believes that the recent paper from the Swiss Federal Institute of Technology conclusively shows the efficacy of deplatforming.
“I would take the results with a pinch of salt simply because, yes, of course, if someone relies only on a specific social network and then is deplatformed, then of course, they lose their main channel,” researcher Are noted.
Are points to Andrew Tate and Donald Trump, two prominent examples of people who turned their deplatforming into a recruitment tactic to drive more support.
“It’s not like those people in particular don’t have access to other forms of communications,” Are noted. “They have big platforms outside of the platforms that deplatform them.”
Trump maintained his support through his own social media platform, Truth Social. Other deplatformed creators retained some of their base through platforms like Gab and Parler, and streamers banned from Twitch, like Adin Ross and Destiny, found new homes on the less mainstream platforms Kick and Rumble.
But even after being on the wrong end of social media bans, Klub Verboten’s Karl believes that there’s still a case for responsible deplatforming.
“I think it’s very much a struggle of this time, and I’m not sure if anyone has found the answer to it,” Karl added. “Because the question is, where do we draw the line?”