For years, writers have been calling out cyberspace for being full of garbage. It’s a permanent archive of all the weirdest, grossest, darkest, and most disturbing parts of ourselves — immortalized into offensive memes, snarky Reddit threads, and bad tweets. If you’re anything like me, you probably just want to curate your feed down to Snoopy/Sanrio memes, funny old Vines, and Fantano music reviews, and push all the other stuff as far out of sight as possible.
But of course, taking out the trash doesn’t make it go away. Someone else has to lug your crap across town, across the ocean, into some trash island or landfill somewhere — in the internet’s case, it’s Facebook, Instagram, YouTube, TikTok, and Twitch moderators exiling users to 4chan, Kick, or Truth Social. Or, pushing people into the millions of disparate servers and blogs that don’t have any kind of regulatory influence whatsoever. The internet is vast, and people can get away with a lot in the junkyard.
This “take the trash out of sight” strategy works in the short term, of course. We normies who live on Instagram and TikTok don’t have to see it, smell it, address it. But putting horrible things into ever-piling corners of the world (or the internet) isn’t going to stop the source.
We realized that with Trump, right? The loudest liberal-leaning Twitter users didn’t want him around. Yet removing him from our Twitter (and Facebook) timelines did nothing to stop his reach or power. In case you missed it this week, he’s ahead in the polls for the 2024 election. (It’s Nov. 7, by the way, Happy Election Day!)
Look, do I think Trump should’ve been banned from social media? Uh… yeah. He fanned the flames of a chud-y riot, intimidated the judiciary branch, threatened journalists, etc., etc. The “free speech” argument is at times used as a red herring, distracting from the consequences of poor conduct. But the point is, banning Trump did nothing to stop him from creating endless amounts of content in more isolated corners of the internet. If anything, it’s taught us that siphoning off his millions of followers to remote areas where they’ll be further radicalized, with no checks and balances, only allows them to further cultivate their views.
Perhaps, then, it’s better to take a different, more hands-on approach toward curating a healthy public digital space. But the question of who exactly is responsible for the health of the internet has been around since its birth. There have been peaks in the debate surrounding the Adpocalypse, GamerGate, the 2016 election, and the pandemic. The rise of faulty AI content-flagging systems drives endless controversy. Elon Musk’s free-speech-absolutist makeover of Twitter also renewed interest. But this week in particular is seeing a surge in debates on the topic.
Twitch, for example, decided last week it would take a “rehabilitative approach” to reinstating and educating banned creators. YouTube (after being sued) decided it would impose restrictions on video recommendations to teenagers related to sensitive topics (like body image). The Biden administration is now advocating for heavier regulation of the use of AI on social media platforms. And the Supreme Court is hearing three critical cases this week on the subject of content moderation — including whether banning people with “contentious views” violates the First Amendment.
We already know that platforms don’t like to be regulated, but they hate bad and costly regulations even more. On Monday, UNESCO (a United Nations organization that regularly comments on international human rights standards) released global guidelines on how to regulate misinformation and hate speech on the internet, after consulting with social media platforms and industry leaders. As they currently stand, the guidelines are vague and non-enforceable. However, according to Guilherme Canela, Head of UNESCO’s Freedom of Expression and Safety of Journalists section, content creators from various regions across the world engaged in the consultation process for the guidelines, demonstrating a particular interest in how social media platforms flag harmful content.
“The main takeaway of their inputs was about the importance of making sure that the guidelines had enough safeguards for freedom of expression by promoting digital platforms transparency and accountability in general, but particularly in the use of advertising as part of their business model and the need to change the logic of recommendation mechanisms.” —UNESCO’s Guilherme Canela
A lack of transparency on platforms’ decision-making processes strikes me as one of the most stressful issues for creators. After all, when your income is riding on a platform’s whims, the worst nightmare is to lose your channel or have your content taken down and not know why. And platforms are not great at communicating with creators.
Twitch streamer Hasan Piker, for example, said Twitch didn’t want to have to take a stand on why he was temporarily banned from the platform, seemingly over the use of the word “cracker.” Twitch has a policy of not publicly commenting on why it bans creators. Other platforms, like YouTube and TikTok, are also notorious for being vague about the specific reasons behind channel bans, strikes, suspensions, and rejected appeals.
Personally, I think there should be more conversations about how to take a restorative justice approach to the internet, rather than a punitive one. Platforms rarely appear to involve victims of harm in their content moderation processes or to communicate decisions to the public at large. While some people (trolls, spammers, scammers, truly evil people, etc.) may seem incapable of change, many people are capable of it. Having a process geared toward education, transparency, accountability, healing, and repair would be revolutionary for the internet.
This includes rigorous training and mental health care support for the moderators themselves, who are often subjected to traumatizing content every day and do not have the resources they need to succeed. For example, this horrifying story broke in 2019 about the poor working conditions of Facebook moderators overseas. The internet is a vast, untamed ecosystem with huge swathes of horrible content being created each day — but Facebook in 2020 was estimated to have an annual revenue of $70.7 billion, which is comparable to the GDP of Venezuela, so it’s not all that unreasonable to expect it to invest in these resources.
According to a recent survey by UNESCO, 90% of people (across 16 countries) believe online disinformation and fake news are serious issues that need to be addressed by social media platforms. 88% of people also said they think the government should play a role in regulating the platforms.
That said, our public officials have shown they don’t understand how the internet works, and the major social media platforms certainly have a profit motive to push out viral content regardless of human rights violations, so the situation is a bit bleak.
But while responsible content moderation may be complicated and costly, it’s more expensive for platforms to alienate advertisers, creators, and users, risking the fall of their empires. Twitch’s new rehabilitative approach seems to be a step in the right direction, signaling an investment in mediation teams as opposed to solely punitive moderation teams. And more attention, skepticism, and resources are going toward content moderation AI tools. For example, Discord, Google, Meta, Snap, and Twitch announced yesterday a joint push to test out new tools to combat abuse online. Perhaps there’s even some hope for a decentralized approach to the internet, with some loving labor from our friendly neighborhood mods.
There is some incentive to clean up the internet; it’ll just take some pressure to do it the right way.
What is your experience with content moderation? Email [email protected] to share your story.