The Internet Is Trash. Who’s Going To Clean It Up?

CREATOR NEWSLETTER


For years, writers have been calling out cyberspace for being full of garbage. It’s a permanent archive of the weirdest, grossest, darkest, and most disturbing parts of ourselves — immortalized in offensive memes, snarky Reddit threads, and bad tweets. If you’re anything like me, you probably just want to curate your feed down to Snoopy/Sanrio memes, funny old Vines, and Fantano music reviews, and shove everything else as far out of sight as possible.

But of course, taking out the trash doesn’t make it go away. Someone else has to lug your crap across town, across the ocean, to some trash island or landfill — in the internet’s case, that someone is the Facebook, Instagram, YouTube, TikTok, and Twitch moderators exiling users to 4chan, Kick, or Truth Social, or pushing them into the millions of disparate servers and blogs with no regulatory oversight whatsoever. The internet is vast, and people can get away with a lot in the junkyard.

This “take the trash out of sight” strategy works in the short term, of course. We normies who live on Instagram and TikTok don’t have to see it, smell it, or address it. But shoving horrible things into ever-growing piles in the corners of the world (or the internet) isn’t going to stop the source.

We realized that with Trump, right? The loudest liberal-leaning Twitter users didn’t want him around. Yet removing him from our Twitter (and Facebook) timelines did little to stop his reach or power. In case you missed it this week, he’s ahead in the polls for the 2024 election. (It’s Nov. 7, by the way, Happy Election Day!) 

Look, do I think Trump should’ve been banned from social media? Uh… yeah. He fanned the flames of a chud-y riot, intimidated the judiciary, threatened journalists, etc., etc. The “free speech” argument is at times used as a red herring, distracting from the consequences of poor conduct. But the point is, banning Trump did nothing to stop him from creating endless amounts of content in more isolated corners of the internet. If anything, it taught us that siphoning his millions of followers off to remote corners with no checks and balances only radicalizes them further and lets them cultivate their views unchecked.

Perhaps, then, it’s better to take a different, more hands-on approach to curating a healthy public digital space. But the question of who exactly is responsible for the health of the internet has been around since its birth. The debate has peaked around the Adpocalypse, GamerGate, the 2016 election, and the pandemic. The rise of faulty AI content-flagging systems drives endless controversy, and Elon Musk’s free-speech-absolutist makeover of Twitter has renewed interest as well.

But this week in particular is seeing a surge in debate on the topic. Twitch, for example, decided last week it would take a “rehabilitative approach” to reinstating and educating banned creators. In other news, after being sued, YouTube decided to restrict recommendations of videos on sensitive topics (like body image) to teenagers. The Biden administration is now advocating for heavier regulation of the use of AI on social media platforms. And the Supreme Court is hearing three critical cases this week on content moderation — including whether banning people with “contentious views” violates the First Amendment.

We already know that platforms don’t like to be regulated, but they hate bad and costly regulations even more. Yesterday, UNESCO (a United Nations agency that regularly comments on international human rights standards) released global guidelines on how to regulate misinformation and hate speech on the internet, after consulting with social media platforms and industry leaders. As they currently stand, the guidelines are vague and non-enforceable. However, according to Guilherme Canela, Head of UNESCO’s Freedom of Expression and Safety of Journalists section, content creators from regions across the world engaged in the consultation process for the guidelines, demonstrating a particular interest in how social media platforms flag harmful content.


THE COMMENTS SECTION


A lack of transparency around platforms’ decision-making processes strikes me as one of the most stressful issues for creators. After all, when your income is riding on a platform’s whims, the worst nightmare is to lose your channel or have your content taken down and not know why. And platforms are not great at communicating with creators.

Twitch streamer Hasan Piker, for example, said Twitch didn’t want to take a public stand on why he was temporarily banned from the platform, seemingly over his use of the word “cracker.” Twitch has a policy of not explicitly commenting in public on why it bans creators. Other platforms, like YouTube and TikTok, are also notorious for being vague about the specific reasons behind channel bans, strikes, suspensions, and rejected appeals.

Personally, I think there should be more conversations about how to take a restorative justice approach to the internet, rather than a punitive one. Platforms rarely appear to involve victims of harm in their content moderation processes, or to communicate decisions to the public at large. While some people (trolls, spammers, scammers, the truly malicious, etc.) may seem incapable of change, many others are not. Having a process geared toward education, transparency, accountability, healing, and repair would be revolutionary for the internet.

This includes rigorous training and mental health support for the moderators themselves, who are often subjected to traumatizing content every day without the resources they need to cope. A horrifying story broke in 2019, for example, about the poor working conditions of Facebook moderators overseas. The internet is a vast, untamed ecosystem with huge swathes of horrible content created each day — but Facebook reported $70.7 billion in revenue for 2019, comparable to the GDP of Venezuela, so it’s not all that unreasonable to expect it to invest in these resources.

According to a recent survey by UNESCO, 90% of people across 16 countries believe online disinformation and fake news are serious issues that social media platforms need to address. 88% also said governments should play a role in regulating the platforms.

That said, our public officials have shown they don’t understand how the internet works, and the major social media platforms certainly have a profit motive to push out viral content regardless of human rights violations, so the situation is a bit bleak.

But while responsible content moderation may be complicated and costly, it’s even more expensive for platforms to alienate advertisers, creators, and users, risking the fall of their empires. Twitch’s new rehabilitative approach seems to be a step in the right direction, signaling an investment in mediation teams as opposed to solely punitive moderation teams. And more attention, skepticism, and resources are going toward AI content moderation tools. For example, Discord, Google, Meta, Snap, and Twitch yesterday announced a joint push to test new tools to combat abuse online. Perhaps there’s even some hope for a decentralized approach to the internet, with some loving labor from our friendly neighborhood mods.

There is some incentive to clean up the internet; it’ll just take some pressure to do it the right way.


PLATFORMS

Twitch Announces Rehabilitation for Banned Streamers


If it’s in your head, it’s on Shutterstock

You never have to compromise your creative vision when you use Shutterstock. With all-new creative AI-powered editing features and a library of 700 million stock images, you’ll find everything you need to make your project stand out. Now through November 20, get 20% off sitewide with code STANDOUT. 


IN THE BIZ


MONETIZATION

TikTok’s New Creator Program Is Just as Vague as the Old One

The infamous Creator Fund is being retired and replaced by the Creativity Program beta, which vaguely claims to offer larger payouts for longer videos.

By Steven Asarch, Passionfruit Contributor


ADVERTISING


From Big Oil to the U.S. Navy: Some Interesting Advertisers Are Recruiting Young People Through Twitch

Oil companies, insurance sellers, and military recruiters are all taking to Twitch to reach a lucrative younger audience.

By Steven Asarch, Passionfruit Contributor

