The Internet Is Trash. Who’s Going To Clean It Up?


For years, writers have been calling out cyberspace for being full of garbage. It’s a permanent archive of all the weirdest, grossest, darkest, and most disturbing parts of ourselves — immortalized into offensive memes, snarky Reddit threads, and bad tweets. If you’re anything like me, you probably just want to curate your feed to be Snoopy/Sanrio memes, funny old Vines, Fantano music reviews, and then take all the other stuff and put it as far out of sight as possible.

But of course, taking out the trash doesn’t make it go away. Someone else has to lug your crap across town, across the ocean, into some trash island or landfill somewhere — in the internet’s case, it’s Facebook, Instagram, YouTube, TikTok, and Twitch moderators exiling users to 4chan, Kick, or Truth Social. Or, pushing people into the millions of disparate servers and blogs that don’t have any kind of regulatory influence whatsoever. The internet is vast, and people can get away with a lot in the junkyard.

This “take the trash out of sight” strategy works in the short term, of course. Us normies who live on Instagram and TikTok don’t have to see it, smell it, address it. But putting horrible things into ever-piling corners of the world (or the internet) isn’t going to stop the source.

We realized that with Trump, right? The loudest liberal-leaning Twitter users didn’t want him around. Yet removing him from our Twitter (and Facebook) timelines did little to stop his reach or power. In case you missed it this week, he’s ahead in the polls for the 2024 election. (It’s Nov. 7, by the way, Happy Election Day!) 

Look, do I think Trump should’ve been banned from social media? Uh… yeah. He fanned the flames of a chud-y riot, intimidated the judiciary branch, threatened journalists, etc, etc. The “free speech” argument is at times used as a red herring, distracting from the consequences of poor conduct. But the point is, banning Trump did nothing to stop him from creating endless amounts of content in more isolated corners of the internet. If anything, it taught us that siphoning his millions of followers off to remote areas with no checks and balances only allows them to cultivate their views and become further radicalized.

Perhaps then it’s better to take a different, more hands-on approach toward curating a healthy public digital space. But the question of who exactly is responsible for the health of the internet has been around since its birth. There have been some peaks in the debate surrounding the Adpocalypse, GamerGate, the 2016 election, and the pandemic. The rise of faulty AI content-flagging systems drives endless controversy. Elon Musk’s free-speech-absolutist makeover of Twitter also renewed interest.

But this week in particular is seeing a surge in debates on the topic. Twitch, for example, decided last week it would take a “rehabilitative approach” to reinstating and educating banned creators. In other news, after being sued, YouTube decided it would impose restrictions on video recommendations to teenagers related to sensitive topics (like body image). The Biden administration is now advocating for heavier regulation of the use of AI on social media platforms. And the Supreme Court is hearing three critical cases this week on the subject of content moderation — including whether banning people with “contentious views” violates the First Amendment.

We already know that platforms don’t like to be regulated, but they hate bad and costly regulations even more. Yesterday, UNESCO (a United Nations agency that regularly comments on international human rights standards) released global guidelines on how to regulate misinformation and hate speech on the internet, after consulting with social media platforms and industry leaders. As they currently stand, the guidelines are vague and non-enforceable. However, according to Guilherme Canela, Head of UNESCO’s Freedom of Expression and Safety of Journalists section, content creators from various regions across the world engaged in the consultation process for the guidelines, demonstrating a particular interest in how social media platforms flag harmful content.


A lack of transparency on platforms’ decision-making processes strikes me as one of the most stressful issues for creators. After all, when your income is riding on a platform’s whims, the worst nightmare is to lose your channel or have your content taken down and not know why. And platforms are not great at communicating with creators. 

Twitch streamer Hasan Piker, for example, said Twitch didn’t want to have to take a stand on why he was temporarily banned from the platform, seemingly over his use of the word “cracker.” Twitch has a policy of not publicly commenting on why it bans creators. Other platforms, like YouTube and TikTok, are also notorious for being vague about the specific reasons behind channel bans, strikes, suspensions, and rejected appeals.

Personally, I think there should be more conversations about how to take a restorative justice approach to the internet, rather than a punitive one. Platforms rarely appear to involve victims of harm in their content moderation processes, or to communicate their decisions to the public at large. While some people (trolls, spammers, scammers, truly evil people, etc) may seem incapable of change, many people are capable of it. Having a process geared toward education, transparency, accountability, healing, and repair would be revolutionary for the internet.

This includes rigorous training and mental health care support for the moderators themselves, who are often subjected to traumatizing content every day and do not have the resources they need to succeed. For example, this horrifying story broke in 2019 about the poor working conditions of Facebook moderators overseas. The internet is a vast, untamed ecosystem with huge swathes of horrible content being created each day — but Facebook in 2020 was estimated to have an annual revenue of $70.7 billion, which is comparable to the GDP of Venezuela, so it’s not all that unreasonable to expect it to invest in these resources.

According to a recent survey by UNESCO, 90% of people (across 16 countries) believe online disinformation and fake news are serious issues that need to be addressed by social media platforms. 88% of people also said they think the government should play a role in regulating the platforms.

That said, our public officials have shown they don’t understand how the internet works, and the major social media platforms certainly have a profit motive to push out viral content regardless of human rights violations, so the situation is a bit bleak.

But while responsible content moderation may be complicated and costly, it’s more expensive for platforms to alienate advertisers, creators, and users, risking the fall of their empires. Twitch’s new rehabilitative approach seems to be a step in the right direction, signaling an investment in mediation teams as opposed to solely punitive moderation teams. And more attention, skepticism, and resources are going toward content moderation AI tools. For example, Discord, Google, Meta, Snap, and Twitch announced yesterday a joint push to test out new tools to combat abuse online. Perhaps there’s even some hope for a decentralized approach to the internet, with some loving labor from our friendly neighborhood mods.

There is some incentive to clean up the internet; it’ll just take some pressure to do it the right way.


Twitch Announces Rehabilitation for Banned Streamers

If it’s in your head, it’s on Shutterstock

You never have to compromise your creative vision when you use Shutterstock. With all-new creative AI-powered editing features and a library of 700 million stock images, you’ll find everything you need to make your project stand out. Now through November 20, get 20% off sitewide with code STANDOUT. 



TikTok’s New Creator Program Is Just as Vague as the Old One

The infamous Creator Fund is retiring, being replaced by the Creativity Program beta, which vaguely claims to offer larger payouts for longer videos.

By Steven Asarch, Passionfruit Contributor




From Big Oil to the U.S. Navy: Some Interesting Advertisers Are Recruiting Young People Through Twitch

Oil companies, insurance sellers, and military moguls are all taking to Twitch to reach a lucrative younger audience.

By Steven Asarch, Passionfruit Contributor

