Can We Protect Kids Without Ruining the Internet?

Image credit: Shutterstock/sakkmesterke (Licensed)

In this week’s edition of his column, writer Lon Harris examines recent efforts to clean up platforms through content moderation without eradicating free speech.


In a troubling clip posted over the weekend to Twitter/X, content creator “Sneako” (real name: Nicolas Kenn De Balinthazy) is seen getting mobbed in public by a group of young fans. The influencer is a fixture in right-wing internet communities, particularly those associated with the “red pill” movement, and so the kids are attempting to get their hero’s attention by repeating hate speech they presumably picked up via his streams. The boys shout misogynist, homophobic, and transphobic things at Sneako, who seemingly play-acts being shocked and appalled by what he has heard. (In case anyone is fooled by his performance in the video, later comments and videos posted to his social media channels demonstrate a general lack of remorse.)

The revelation that kids are being exposed to Sneako-type content isn’t happening in isolation. YouTube also just turned off monetization on Russell Brand’s channel, which has taken a hard-right, conspiratorial tone of late. According to a new joint investigation by three U.K. publications, the actor and comedian has been accused of sexually assaulting multiple women, one of whom was only 16. “Rick and Morty” co-creator Justin Roiland was also removed from his show following an arrest in January for felony domestic violence. (The charges were later dismissed due to insufficient evidence.) Then, last week, NBC News published an exposé revealing a disturbing pattern of Roiland using social media and dating apps to pursue young fans.

The internet is teeming with billions of people messaging each other and creating content, and true content moderation at that scale is neither straightforward nor practical. That’s a big part of why social media companies are shielded by Section 230 from liability for the illegal or hate-filled content their users post.

Nonetheless, for larger and more prominent sites and communities such as Google’s YouTube or Amazon’s Twitch, doing nothing is no longer an option. These businesses thrive on advertising, and most sponsors don’t wish to be associated with divisive, third-rail content like Sneako’s or Russell Brand’s.

Parental Controls Are Here if You Want Them

YouTube has been more aggressive than most in attempting to protect young viewers from sexual predators or content including hate speech. In 2015, the company set up a permanent walled garden, YouTube Kids — essentially an abridged, bespoke version of the site that allows younger users to browse freely (hopefully) without encountering the Sneakos of the world.

The main YouTube site also has extensive parental protections in place for those who want to monitor what their child sees. Parents can dismiss individual pieces of content by clicking “Not Interested,” which adjusts the algorithm behind what gets recommended to their children. There’s also a more holistic “Restricted Mode” parents can use to customize what kinds of content get served to their kids and what gets blocked. In 2021, YouTube also introduced Supervised Accounts for teenagers, which offer three levels of access from which parents can choose: the restrictive “Explore,” the middle-ground “Explore More,” and the least-restrictive “Most of YouTube.”
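To make the tiered approach concrete, here’s a minimal sketch of how a supervised-account filter might work in principle. It’s purely illustrative: the tier names echo YouTube’s labels, but the ratings, sample videos, and logic are assumptions made for the sake of example, not anything Google has published.

```python
from enum import IntEnum

class SupervisionTier(IntEnum):
    """Hypothetical access tiers, loosely mirroring YouTube's three labels."""
    EXPLORE = 1          # most restrictive
    EXPLORE_MORE = 2     # middle ground
    MOST_OF_YOUTUBE = 3  # least restrictive

# Assumed content ratings for a handful of made-up videos (1 = mildest, 3 = most mature).
VIDEO_RATINGS = {
    "science-for-kids": 1,
    "teen-gaming-stream": 2,
    "mature-commentary": 3,
}

def is_allowed(video: str, tier: SupervisionTier) -> bool:
    """Allow a video only if its rating is at or below the account's tier."""
    # Unknown videos default to the most mature rating, i.e. blocked on lower tiers.
    return VIDEO_RATINGS.get(video, 3) <= tier

# A child on the "Explore" tier would only see the first video.
for title in VIDEO_RATINGS:
    print(title, is_allowed(title, SupervisionTier.EXPLORE))
```

The real system is obviously far more involved, relying on classifiers and human review rather than a lookup table, but the basic shape of the decision is the same: a per-account ceiling applied to a per-video rating.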

Amazon’s Twitch offers chat filters and moderation tools that let the parents of teens set some basic parameters for their kids before they start streaming. The platform also has a proactive detection team that monitors Twitch streams for inappropriate content and acts on offending streams even if they haven’t been flagged by users. As for children under 13, Twitch takes essentially no steps to protect them, because they’re not supposed to be on the service at all; its terms of service bar anyone under 13 from using Twitch.

But these tools only work if people use them. And kids not only circumvent their parents’ scrutiny — they skirt the platforms’ rules as well.

Law and Disorder

Under the federal Children’s Online Privacy Protection Act, or COPPA, sites like YouTube are barred from collecting personal data about users under the age of 13 for advertising purposes. But that hasn’t actually stopped them. In 2019, YouTube and Google agreed to pay a record $170 million fine after the Federal Trade Commission and the State of New York found they were illegally collecting data from young users. As a result, Google vowed to stop serving personalized ads on children’s videos. But just last month, new research from Adalytics and The New York Times seemed to indicate that the company was once again violating COPPA by tracking young users and serving them ads without parental consent.

Governments have stepped in at various points and forced the platforms’ hand. On Tuesday, the U.K. Parliament passed the sweeping new Online Safety Bill, which introduces a raft of regulations designed to protect kids online. According to The New York Times, the 300-page bill is among the most far-reaching attempts by any Western democracy to regulate online speech. Companies will be forced to proactively screen for objectionable and potentially illegal material rather than acting only after being alerted to it.

Platforms like TikTok, YouTube, Facebook, and Instagram will be required to introduce new features that give users robust options to block particular kinds of content, such as videos featuring racism, misogyny, or antisemitism. The bill also places general restrictions on content that may be harmful to children, such as the promotion of suicide or self-harm, or certain discussions of eating disorders.

To Block or Not to Block?

This kind of proactive legislation is sweeping and aggressive, for sure, and it would likely limit the number of kids exposed to hate speech and other harmful content. Nonetheless, it raises a few red flags of its own. Any approach to content moderation has to strike a balance between free speech and the protection of vulnerable groups like minors.

Most people agree that adults should largely be able to decide for themselves what kind of content to enjoy, with perhaps minor restrictions on truly dangerous material like, say, bomb-making instructions, medical misinformation, or incitements to terrorism.

It wasn’t just websites and tech companies that opposed the U.K.’s Online Safety Bill; some free speech advocates also condemned it as overly restrictive, warning it would produce a wider “chilling effect” on online speech.

Digital rights advocates have also argued that similar laws could negatively impact LGBTQ+ children and those seeking reproductive healthcare in certain states. The Heritage Foundation, a conservative think tank, tweeted in May that it plans to use the similar Kids Online Safety Act proposed in the U.S. to censor LGBTQ+ content, saying, “keeping trans content away from children is protecting kids.” In November 2022, more than 90 human rights organizations published an open letter opposing the U.S. bill, saying it would actually “undermine” the protection of children online.

Moderating is Hard!

And the biggest obstacle to supervising internet content remains the wide-open nature of the internet itself. Wikipedia’s parent organization, the Wikimedia Foundation, has said it is simply unable to comply with the U.K.’s Online Safety Bill, given the vast amount of content being posted and updated on its site. Wikipedia could be blocked in the U.K. as a result.

The hope that automated tools, perhaps powered by the fancy new AI applications we’ve heard so much about, could take on the bulk of the content moderation job has started to dwindle. So far, at least, they don’t seem up to the task without a human counterpart, either letting too much offensive material through or blocking too much.
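To see why the “too much through or too much blocked” problem is so stubborn, consider a minimal sketch of the crudest form of automated moderation, a keyword filter. The blocklist, messages, and misspelling trick below are all hypothetical, but the tradeoff they illustrate is real: strict substring matching flags innocent phrases while simple obfuscation slips right past it.

```python
# Hypothetical blocklist and chat messages, purely for illustration.
BLOCKLIST = {"ass", "hate"}

MESSAGES = [
    "I really hate homework",   # mild venting, yet it gets flagged
    "Meet me in class after",   # innocent, but "class" contains "ass" (over-blocking)
    "I h4te you, loser",        # hostile, but the misspelling evades the filter (under-blocking)
]

def naive_filter(message: str) -> bool:
    """Return True if the message should be blocked under simple substring matching."""
    text = message.lower()
    return any(word in text for word in BLOCKLIST)

for msg in MESSAGES:
    verdict = "BLOCKED" if naive_filter(msg) else "allowed"
    print(f"{verdict:8} | {msg}")
```

Smarter classifiers move this tradeoff around rather than eliminating it, which is why human reviewers still sit behind even the more sophisticated automated systems.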

And if you’re wondering how Sneako was able to dodge YouTube’s content moderation team and spread his message of hate to young people … he wasn’t. Both of Sneako’s YouTube channels were banned in October of 2022 for repeated violations of the terms of service, mostly due to posting extremist content, and he was barred from Twitch as well just two days after signing on to the platform. (He was similarly banned from Twitter in September 2022, after threatening to “break [the] face” of a fellow user, but was reinstated later that year.) 

Post-YouTube, Sneako has been growing his audience on Rumble, a right-wing Canadian streaming platform that also provides hosting services for Donald Trump’s Truth Social. Rumble does claim to suppress some content for hate speech or “extremism.” Still, having marketed itself as a freer, more open alternative to restrictive sites like YouTube and Twitter, it moderates far less aggressively than its counterparts.

So on some level, this inevitably comes back to users. Platforms can do a lot, but if your kids can simply jump to an alternate platform with fewer rules, does that even matter? And if governments can only limit potential dangers by blocking entire categories of websites, even Wikipedia, is that a real solution?

These are tough questions! Too bad I’m out of space!

What do you know about content moderation? Have our next big story? Email [email protected] to get in touch with us.
