
Down the Stack: Power and Accountability in Internet Intermediaries’ Content Moderation Decisions

Opinion by Georgia Evans. This piece is part of Digital Hate, a series by Alexandra Wilson exploring the rise of right-wing extremism online.

Shielded from liability, internet intermediaries have long served as ‘neutral’ conduits for people to interact and express themselves online. These intermediaries facilitate the use of the internet and communication over it, and include internet service providers, cloud services, web hosting companies, search engines, social media platforms, financial payment services, and more [1]. While these companies are overseen by various bodies, there has been a recent push to regulate them more directly as society digitizes and online problems spill into the offline world.

Since the 2016 U.S. presidential election, citizens and governments around the world have come to grapple with the fact that what happens online does not remain in a digital vacuum. The words that live online have effects in the physical world. Right-wing extremism, once dismissed as confined to the incel culture of 4chan, Reddit, and the like [2], has turned into real-world violence time and time again. This problem has led people to turn to internet intermediaries to moderate online content, hoping to prevent violence and societal breakdown by removing harmful content.

The most natural place to look for content moderation is platforms. Content moderation is the “detection of, assessment of, and interventions taken on content or behaviour deemed unacceptable by platforms or other information intermediaries” [3]. Facebook is often the focal point of content moderation debates because of its sheer volume of users – approximately 2.85 billion worldwide as of the first quarter of 2021 [4] – and its role in spreading mis- and disinformation globally. Discussions of platform governance and accountability have captured civil society and governments alike.

While we often analyze content moderation as what occurs on, by, or to a platform, content moderation performed by internet intermediaries at deeper levels of the internet stack merits more attention than it currently receives. These players wield significant power with little oversight, and their actions against harmful content vary in effectiveness and magnitude.

The tech stack can be understood through a variety of models, such as the Open Systems Interconnection (OSI) model, the traditional Internet protocol suite, and even models set out in legislation like the EU Digital Services Act. As the internet grows, more intermediaries join the stack and change its structure. At the top of the stack is the application layer: the platforms and websites, like Facebook, Twitter, and news sites, that internet users interact with most frequently [5]. The EU Digital Services Act (DSA) distinguishes ‘online platforms’ from ‘very large online platforms’ to ensure that requirements for Facebook and other digital behemoths are not unfairly applied to small and medium-sized digital enterprises and platforms [6]. Most content moderation debates concern this layer. Platforms most commonly moderate harmful content through de-amplification, flagging, and removal. Actions at the application layer have the highest accuracy and the lowest chance of implicating legitimate speech; the further down the stack a company sits, the more extreme its options for combatting harmful content and the higher the chance that lawful content will be implicated.

Beneath the players operating at the application layer are web hosting providers and cloud services. Web hosting providers, such as Shopify and WordPress, are companies that people pay to have their websites ‘hosted’ [7]. Hosting providers can remove content from a website, or remove the website itself. Cloud service providers include Amazon Web Services (AWS) and Microsoft Azure; people and companies store and process their websites and data through these providers [8]. A cloud service provider can stop hosting and processing a company’s data, forcing it to find another provider.

The EU DSA also legislates “intermediaries offering network infrastructure,” including content delivery networks (CDNs), domain name registries and registrars, and internet service providers [9]. In the stack, these players operate below web hosting and cloud services. CDNs are distributed networks of servers that speed up the load times of website content by reducing the physical distance between server and user. For example, if a user in Canada wanted to access a website hosted in Australia, the data would have to travel along internet cables from Australia to Canada, making the site slow to load. A CDN cuts down the load time by caching a pre-saved version of the web content on a server closer to, or even in, Canada [10]. The CDN market is dominated by Akamai, Cloudflare, and Amazon. When a CDN stops providing its services, a company’s websites can become completely unavailable; because suitable alternatives are hard to find, individuals and companies can end up unable to get back online at all.
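The caching behaviour described above can be sketched as a toy edge cache. This is a deliberately simplified illustration, not any vendor’s actual implementation; all names and paths are invented for the example:

```python
# Toy sketch of a CDN edge cache: the first request for a path travels
# to the distant origin server; later requests are served from the
# nearby edge, cutting load time. All names here are illustrative.

class OriginServer:
    """Stands in for the faraway (e.g. Australian) web server."""
    def __init__(self):
        self.requests_served = 0

    def fetch(self, path):
        self.requests_served += 1          # each origin hit is the "slow" trip
        return f"content of {path}"

class EdgeCache:
    """Stands in for a CDN server close to the user (e.g. in Canada)."""
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}

    def get(self, path):
        if path not in self.cache:         # cache miss: go all the way to origin
            self.cache[path] = self.origin.fetch(path)
        return self.cache[path]            # cache hit: served from nearby

origin = OriginServer()
edge = EdgeCache(origin)
edge.get("/news")                          # first request travels to the origin
edge.get("/news")                          # second request is served at the edge
print(origin.requests_served)              # the origin was contacted only once
```

The same logic also shows the flip side the article describes: if the edge disappears, every request must make the slow trip to the origin, and if no comparable edge network exists, the site may be effectively unreachable at scale.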

Domain registrars and registries run the domain name system (DNS), the telephone book of the internet. Their activities ensure that when you type in a domain name, you reach the corresponding website. There are several actions domain registrars and registries can take to suspend and delete harmful domain names; however, these actions are sweeping yet ineffective. Suspending or deleting a domain name is like closing off an entire road because one house on it contains something illegal. A website can still exist, however, in the absence of a domain name: users could access the harmful content through an IP address or by using a virtual private network (VPN). Finally, below domain registrars and registries are the internet service providers (ISPs), like Bell, Rogers, and TekSavvy, that provide the physical internet connection for people to access the network of networks. ISPs can filter harmful content so that certain websites are not accessible to their customers.
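The telephone-book analogy, and why a domain takedown is easy to sidestep, can be sketched in a few lines. This is a toy model, not real DNS; the domain is a placeholder and the address comes from the reserved documentation range:

```python
# Toy model of the DNS "telephone book": a registry maps domain names
# to IP addresses. Suspending a domain deletes the listing, but the
# server behind it keeps running and stays reachable by IP address.
# The domain and IP below are illustrative placeholders.

registry = {"harmful-site.example": "203.0.113.7"}
servers = {"203.0.113.7": "the harmful content, still online"}

def resolve(domain):
    """Look up a domain name; returns None if it has been suspended."""
    return registry.get(domain)

registry.pop("harmful-site.example")       # registrar suspends the domain

print(resolve("harmful-site.example"))     # None: the name no longer resolves
print(servers["203.0.113.7"])              # but the IP still serves the content
```

The takedown removes the convenient name, not the content, which is exactly the "closing off the road" problem the article describes.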

The nature of the open internet means that intermediaries are both interoperable and interconnected. This also means that the action of one player on one layer can affect players on other layers. For example, on June 8, 2021, CDN provider Fastly experienced an outage that left Reddit, PayPal, and United Kingdom government websites unavailable [11]. These players hold significant power that becomes most visible when problems on the internet arise.

Dr. Corinne Cath-Speth of the Oxford Internet Institute argues that internet intermediaries, despite touting their services as a “neutral mere conduit” for communication, act as political gatekeepers in the internet age [12]. Since they are minimally legislated and rarely liable, they rely on industry standards and their own company policies to make decisions when, for example, activities that rely on their services have led to physical, ideologically motivated violence. The deeper into the stack a company is situated, the less precise and more extreme its actions against harmful content are. Moreover, the deeper into the stack a company is situated, the less likely it is to receive attention for its actions compared to, say, Facebook’s decision to suspend Trump’s account.

In 2018, after a shooting at a Pittsburgh synagogue, Microsoft’s Azure cloud service suspended accounts for the social network Gab, a platform associated with white supremacy and known for its “laissez-faire moderation stance” [13]. Earlier this year, following the U.S. Capitol riot, AWS removed Parler from its services. Both Gab and Parler had to find other cloud services to host their content in order to remain operable. In 2019, Cloudflare, a large CDN, stopped providing its services to 8chan, a forum well known for hosting unmonitored message boards that were home to ideologically motivated violent extremist messaging. Cloudflare dropped 8chan after the El Paso shooting that killed 20 and wounded 26 people, with its CEO calling the forum “a cesspool of hate” [14]. While something had to be done about the content of these platforms, the question remains of how much legitimate speech was implicated by Microsoft’s, Amazon’s, and Cloudflare’s decisions to cease their services.

While it appears commendable for these companies to act on instances of far-right violence, “unaccountable interventions made by individual companies and technical organizations are a really unstable foundation for conversations about human rights online” [15]. Beginning a conversation about human rights online only in instances of life and death is problematic, and the actions taken by companies below the application layer do not receive the scrutiny they should. According to Dr. Cath-Speth, an accountability framework for infrastructural players should be developed so that society is not subject to the whims of players who view themselves as “neutral” and therefore intervene only when they deem the public interest in their intervention dire. To be effective, any accountability framework must take a proportional approach to content moderation and prioritize action closest to the source of the harm. Content moderation at the infrastructural level is a debate that will likely unfold over several years as governments grapple with how best to regulate technology.

  1. OECD, “Internet Intermediaries: Definitions, economic models and role in the value chain,” in The Role of Internet Intermediaries in Advancing Public Policy Objectives, OECD Publishing, Paris.

  2. Zack Beauchamp, “Our incel problem,” Vox, April 23, 2019.

  3. Tarleton Gillespie et al., “Expanding the debate about content moderation: scholarly research agendas for the coming policy debates,” Internet Policy Review, vol. 9, issue 4.

  4. H. Tankovska, “Facebook: number of monthly active users worldwide 2008–2021,” Statista.

  5. Cloudflare, “What is the OSI Model?” Cloudflare, accessed June 21, 2021.

  6. European Commission, “The Digital Services Act: ensuring a safe and accountable online environment,” European Commission, accessed June 23, 2021.

  7. Geoffrey A. Fowler & Chris Alcantara, “Gatekeepers: These tech firms control what’s allowed online,” The Washington Post, March 24, 2021.

  8. Ibid.

  9. European Commission, Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC.

  10. Akamai, “What does CDN stand for? CDN Definition,” Akamai, accessed June 23, 2021.

  11. Dr. Corinne Cath-Speth, “The Internet’s Reluctant Sheriffs: Content moderation and political gatekeeping through Internet infrastructure,” [video] Oxford Internet Institute, June 16, 2021.

  12. Ibid.

  13. Geoffrey A. Fowler & Chris Alcantara, “Gatekeepers.”

  14. Tim Elfrink, “‘A cesspool of hate’: U.S. web firm drops 8chan after El Paso shooting,” The Washington Post, August 5, 2019.

  15. Dr. Corinne Cath-Speth, “The Internet’s Reluctant Sheriffs.”
