If you’re a creator, you may have heard Section 230 mentioned in the news lately. But what do you need to know about the law and its implications, and how might it affect your life? What does Section 230 mean, and what does it protect?
To better understand, we have to go back to 1996, the era of dial-up internet, just before AOL Instant Messenger exploded in popularity.
What is Section 230?
Section 230 of the Communications Decency Act (CDA) is a 1996 U.S. law that shields internet platforms from legal liability for hosting and providing access to user-generated content.
Without the CDA’s protections, internet businesses could be sued and held liable for damages over the actions of their users, whether the content in question is defamatory or libelous, sexually explicit or obscene, or harassing, harmful, or violent.
Section 230 is separate from obligations under intellectual property law (such as removing copyrighted content or content that infringes a company’s trademark), data privacy laws, and federal criminal law.
The law forms a foundational component of the open internet, one that has allowed all kinds of social media platforms and websites to establish viable business models, largely driven by advertising revenue or, in some instances, subscriptions.
In return, internet users have the ability to communicate and share information freely and openly, often restricted only by a platform’s content moderation policies.
Before Section 230 became law, liability was determined based on two questions: “Who is the publisher of the content?” and “Who is the distributor of the content?”
If a bookstore carries a book that contains illegal information, such as content that violates local obscenity laws, the bookstore cannot be held liable without specific proof that it is aware of the contents. Section 230 extends the same analysis to treat an internet platform, such as YouTube, like that bookstore.
It’s worth noting Section 230 also protects platforms when they take good faith steps to remove or restrict content that is obscene, excessively violent, harassing, or otherwise objectionable; the law doesn’t require removal, but it shields moderation decisions made in good faith. This is where the development and implementation of content moderation policies become important for platforms.
The broadly interpreted Section 230 we have today didn’t happen overnight. Its protections are the result of decades of legal challenges testing the law’s applicability to an ever-evolving landscape of internet services, challenges that repeatedly put growing internet platforms on the defensive.
Why is Section 230 important for creators?
Given the rapid growth of the creator economy in recent years, creator-run businesses and brands are moving to the forefront of debates and policy-making over how the law shapes an open internet.
“What eBay and Etsy did for small businesses / entrepreneurs ten years ago, influencers are doing now,” Hannah Poteat, an internet and privacy attorney, explained in a tweet. “When new laws target social media or content moderation or privacy, it’s those small businesses who feel [outsized] impacts.”
“Creators should care about Section 230 because it empowers platforms to host and engage with their content,” Jess Miers, legal advocacy counsel at the Chamber of Progress, told Passionfruit. “This in turn creates an open and free space for freedom of expression and innovation online.”
In recent years, Section 230 has become the target of attempts—often politically driven, yet with increasingly bipartisan support—to scale back and restrict the broad applicability of the liability shield.
Arguments in favor of this often point to the dramatically larger size and scale of most internet businesses, as compared to when the CDA became law, and the arguably monopolistic power technology giants hold over content distribution and reach.
“Without Section 230,” Miers said, “websites would be discouraged from hosting user-created content which would mean fewer opportunities for creators to share and monetize their expression.”
What do algorithms have to do with Section 230?
In recent years, the question has arisen whether Section 230’s protections extend to platforms that use recommendation algorithms and similar automated technologies to shape the user experience, given the massive amount of content uploaded and shared on platforms every hour.
As previously covered by Passionfruit, Section 230 was brought up during the TikTok congressional hearing on March 23, 2023, where lawmakers questioned CEO Shou Chew over the company’s data use and privacy practices for its 150 million American users. Lawmakers were particularly interested in the company’s use of algorithms to serve content to users, as well as prevent the spread of misinformation and other sensitive or harmful material.
In the ongoing case Gonzalez v. Google, the U.S. Supreme Court is also grappling with algorithmic content recommendations and whether platforms that use them are acting outside the scope of Section 230’s protections. During oral arguments in February 2023, the Justices seemed unwilling to draw a direct line of liability between a platform and its users simply because an algorithm is used.
In the case, 18 creators and the Authors Alliance filed an amicus brief (a “friend of the court” filing submitted by someone who is not a party to the case) arguing that “Section 230 has helped make it possible for anyone, anywhere in the world to launch and grow a business and build an audience online.” The brief was prepared as part of the Digital Entrepreneur Project, a non-profit initiative that aims to develop and promote policy supporting entrepreneurs and startups.
“[Platforms] might be less likely to host and promote independent creators’ content,” the creators explain in the brief. “New and emerging creators may be unlikely to reach new audiences. And speech generally could be chilled online, hindering Congress’ policy goals of fostering a free and open Internet.”
The use of algorithms, and the competition among platforms to retain viewer attention, has attracted considerable legislative attention and calls for reform. In April 2021, representatives from Facebook, Twitter, and YouTube appeared before a Senate committee to answer questions about their use of algorithms to curate and moderate content. Around the March 2023 House hearing on TikTok, lawmakers also put forward the RESTRICT Act and the DATA Act, which, if passed by Congress, would empower the Biden administration to ban TikTok and impose data sharing restrictions.
Such activity has also been the focus of news reports, studies, and even whistleblower accounts claiming the platforms do society more harm than good.
An open question is whether it’s acceptable for a platform to manually override its own algorithms. For example, TikTok has confirmed that employees have access to a so-called “heating” button that can override the algorithm and promote videos across users’ “For You” pages. In the early days of YouTube, so-called “cool hunters” were responsible for manually curating its home page.
For creators, any policy developments governing how, when, and why platforms can use algorithms may help open up once-secretive, black-box technologies that determine how content is served to other users. In more extreme scenarios, policy changes could limit your access to a platform altogether, as in the proposed legislation that could ban TikTok in the U.S.
What can creators do to prepare for policy changes regarding Section 230?
As previously covered by Passionfruit, our access to a platform, or our understanding of its algorithm, can quickly shift when new policies pass. Until these questions are settled, creators can take proactive steps to ensure the communities they’ve built on certain platforms aren’t entirely lost.
Creators can build an established presence on multiple platforms. They can also study the impact of India’s 2020 TikTok ban on creators, which helped fuel the explosion of Instagram’s and YouTube’s short-form video offerings.
Creators can also remind their audiences of the risk to the community should a social media ban become a reality, and use link-in-bio tools like Linktree or Koji to send viewers to a single page offering a range of alternative ways to stay connected.
Finally, it’s worth considering shifting some of a creator’s audience to a platform they control, such as an email list or website. How viable this option is will vary depending on the type of content a creator focuses on and the ways they engage with their community.