Debate about S. 230, Freedom of Speech, and Content Moderation Needs
Updated: Mar 29, 2021
Policy Brief submitted by Georgia Evans

Problem
Hate speech (1) and mis- and disinformation (2) proliferate on the Internet, and digital platforms have responded by moderating content. These efforts have thrust content moderation, freedom of speech, and Section 230 (S. 230) of the United States Communications Decency Act (CDA) into the centre of public debate. That debate reflects the fundamental tension between the global nature of the Internet and national legal jurisdictions.
Background
The attention economy has created an algorithmic culture in which artificial intelligence (AI) systems monitor users’ data to maximize revenue and engagement. As a result, the typical user’s online experience reflects their interests and beliefs, or what the AI infers those interests and beliefs to be. Misinformation, disinformation, and hate speech thrive in this environment.
In 2016, the United States presidential election was shaped by the widespread circulation of misinformation on Facebook (3). Hate speech and violent extremist language have also incited violence beyond the digital sphere.
Because of these harms, social media giants face intense pressure to perform wide-scale content moderation and remove the hate speech, misinformation, and disinformation that proliferate on their platforms.
In May 2020, a tweet by President Donald Trump was flagged by Twitter for ‘glorifying violence’ (4). He then signed an executive order targeting S. 230 of the CDA “to prevent online censorship.”
What is Section 230?
S. 230 of the CDA was signed into law by the Clinton administration in 1996. There have been several legislative attempts to repeal or reform S. 230 since its inception, and these have drawn significant criticism from the Internet community (5).
The law states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (6). It establishes that intermediaries that host or republish speech are shielded from a range of laws that could otherwise hold them liable for that content.
Intermediaries with these liability protections include Internet service providers and any online service that publishes third-party content, including social media platforms such as Facebook and Twitter. S. 230 has enabled the Internet to thrive on user-generated content.
S. 230 also gives platforms immunity from liability when, in good faith, they remove material that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected” (7).
S. 230 was created to protect freedom of speech and to encourage innovation on the Internet. It dates from 1996, when the Internet reached only millions of people, rather than today’s roughly 4.5 billion.
Jurisdictional Tensions and Challenges of the Global Network
The Internet, put simply, is a global network of networks. No single entity controls it; its decentralized and distributed nature ensures its stability and resilience.
Managing globally available content is extremely difficult given the diversity of laws and norms that can be applied to the Internet (8). Canada, for example, has a legal definition of hate speech, but many jurisdictions do not; how, then, can a platform with two billion users eradicate hate speech?
Policymakers can enact Internet legislation that ends up affecting the user experience across the globe, as the European Union’s General Data Protection Regulation (GDPR) has shown.
If President Trump were to succeed in amending or repealing S. 230, the change would affect Internet users well beyond the United States.
Any policy or standard that seeks to fight hate speech and mis- and disinformation needs to preserve the global character of the Internet to mitigate the risk of the ‘Splinternet’ (9).
Proposed Solutions to Balancing Content Moderation Needs and Freedom of Speech
President Trump’s executive order seeks to narrow the scope of S. 230 in order to curb content moderation efforts on platforms such as Twitter and Facebook (10). Congress, however, is the body with authority over that section of the Telecommunications Act and therefore the power to change it.
Narrowing or removing the liability protections found in S. 230 would result in more censorship and removal of posts from the Internet, not less: platforms facing liability for user-generated content would err on the side of taking material down.
Germany’s 2017 Network Enforcement Act (NetzDG) imposes fines on digital platforms with more than two million registered users if they fail to remove illegal content, including hate speech (11). NetzDG is a legislative attempt to make platforms such as Facebook take more responsibility for harmful content; its effects remain uncertain.
Many scholars stress the need for an approach that protects human rights and respects the rule of law, rather than one that lets companies dictate the norms regulating speech.
For example, Niva Elkin-Koren, the Founding Director of the Haifa Center for Law & Technology, argues that a law-based approach to content moderation can restore aspects of the common good to the digital public square (12).
Notes
(1) While there is no widely agreed-upon definition of hate speech, Facebook defines it as a ‘direct attack on people based on ... protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.’ (Facebook, “Community Standards: 12. Hate Speech,” Accessed 15 October 2020, https://www.facebook.com/communitystandards/hate_speech)
(2) Misinformation can be defined as false or inaccurate information that is spread unintentionally, whereas disinformation is false or inaccurate information that is deliberately spread to influence public opinion and obscure the truth.
(3) Axel Oehmichen et al., “Not All Lies Are Equal. A Study Into the Engineering of Political Misinformation in the 2016 US Presidential Election,” IEEE Access, volume 7, 2019.
(4) Matthew Yglesias, “Twitter flags Trump for ‘glorifying violence’ in ‘looting starts, shooting starts’ tweet,” Vox, 29 May 2020, https://www.vox.com/2020/5/29/21274359/trump-tweet-minneapolis-glorifying-violence
(5) Katie Jordan, “The Internet Is Built on ‘Intermediaries’ – They Should Be Protected,” Internet Society, 2 October 2020, https://www.internetsociety.org/blog/2020/10/the-internet-is-built-on-intermediaries-they-should-be-protected/
(6) Communications Decency Act, 47 U.S.C. § 230.
(7) Ibid.
(8) Rolf H. Weber, “A Legal Lens into Internet Governance,” in Laura DeNardis et al., eds., Researching Internet Governance: Methods, Frameworks, Futures, Cambridge: MIT Press, p. 105.
(9) Bertrand de la Chapelle and Paul Fehlinger, “Jurisdiction on the Internet: From Legal Arms Race to Transnational Cooperation,” Internet & Jurisdiction, 2016, p. 24.
(10) “Executive Order of May 28, 2020, on Preventing Online Censorship,” 47 U.S.C. 230(c), 2020. https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/
(11) Amélie Heldt, “Germany is amending its online speech act NetzDG… but not only that,” Internet Policy Review, 6 April 2020, https://policyreview.info/articles/news/germany-amending-its-online-speech-act-netzdg-not-only/1464
(12) Niva Elkin-Koren, “Contesting algorithms: Restoring the public interest in content filtering by artificial intelligence,” Big Data & Society, volume 7, issue 2, 2020, pp. 1–13.