Differential Diagnosis in Online Regulation
Reframing Canada’s “Systems-Based” Approach

Abstract

In February 2024, following Germany’s “Netzwerkdurchsetzungsgesetz”, the European Union’s Digital Services Act, and the United Kingdom’s Online Safety Act, Canada exploited its “second mover” regulatory status by introducing its long-awaited Bill C-63. Through its Online Harms Act and related amendments, it proposed an innovative “systems-based risk assessment” model for regulating harmful online content. In this article, the authors argue that any truly “systems-based” approach will benefit from regulatory insights and prescriptions informed by the following two interdisciplinary sources. First, both constitutional and media law scholars endorse stepping outside conventional regulatory models by employing more “context-based” or holistic approaches—a regulatory turn seemingly consistent with Canada’s pivot towards an innovative “systems-based” model. Second, exploring further the synergies between law and medicine introduced in our previous Digital Iatrogenesis eucrim article, any enhanced framework aimed at “cracking the code” of digital media regulation will benefit from profound insights native to social medicine and diagnostic theory. Besides providing a convincing case for expanding aetiological (and regulatory) inquiry to include social and environmental factors, established principles of medical diagnosis provide a valuable decision-making protocol for present-day regulators. Taken together, leading regulatory and medico-diagnostic scholarship suggests that prevailing “systems-based” models—as epitomised by Canada’s proposed Online Harms Act—would appear to function as a “blueprint” for privatised government censorship, providing regulators with the legislative mandate, informational transparency, and compliance authority necessary for regulatory capture. As one of the Internet’s “Big Picture” dilemmas, these censorship concerns may yet require reassessment of Europe’s current regulatory framework.

I. Introduction

The Internet and social media have triggered a tectonic shift in our digital “global village”.1 Discourse production has moved onto a new online medium with a radically different structure and dynamic.2 Besides creating a revolutionary “participatory” communications model (i.e. shifting from a few-to-many to many-to-many dynamic),3 a key feature of our digital free speech infrastructure has been the emergence of a small group of powerful privately-owned digital intermediaries—the so-called “Big Five” (Google, Meta (formerly Facebook), Amazon, Microsoft, and Apple)4—who not only effectively “own” and operate the Internet, but function as increasingly decisive arbiters of what information users access online, and what content ultimately reaches the public sphere.5 Generating unprecedented regulatory challenges, a combination of these influential “new governors”,6 an increasingly complex digital media infrastructure, and continuing technological advances not only creates tension with existing legal rules and principles,7 but gives rise to increasing lower-salience structural threats to democracy, manifesting in unprecedented global surveillance, manipulation, and control.8 Regardless of which of the two leading regulatory approaches is championed—viz., the European Union’s predominant “notice-and-action” model or America’s contrasting system of “market self-regulation”—conventional online regulations exhibit a near-singular focus on restricting “problematic” online content (e.g. hate speech and misinformation), leaving the accelerating and more disquieting phenomena of mass surveillance and privatised government censorship unaddressed.9 As we have previously cautioned, without prioritising these structural threats, regulators—much like physicians—risk treating only the symptoms of our increasingly dysfunctional online public sphere, rather than grasping the broader tensions, patterns, and interrelationships that constitute their aetiology.10

A promising antidote to these growing regulatory challenges is Canada’s evolving “multi-stakeholder” approach, which has been marked by extensive public and expert consultations. Though inspired at first by Germany’s popular “notice-and-takedown” model,11 politicians quickly announced plans to go back to the proverbial “drawing board” following widespread criticism that Bill C-36 (Canada’s provisional hate speech legislation introduced in 2021) would likely encroach on freedom of expression.12 Mindful of the need for political and regulatory compromise, Canada’s minority Liberal government proceeded on the sensible expectation that future regulations would not be a straightforward “panacea”, but would comprise only “one piece of a bigger puzzle”.13 By avoiding a fixed timeframe for introducing their new and potentially more forward-thinking framework, Canada’s regulators vowed instead to take whatever time necessary to meet the challenge of “getting the legislation right”.14 On 26 February 2024, following earlier regulatory attempts by Germany, the European Union, and the United Kingdom, Canada exploited its apparent “second mover” status by finally introducing Bill C-63 which, through its Online Harms Act and related amendments, proposes an innovative “systems-based risk assessment” model for regulating harmful content online.

In this article, we argue that despite the Canadian government’s enthusiasm and lofty aspirations, any truly consultative or “systems-based” approach will benefit from regulatory insights and prescriptions informed by the following two interdisciplinary sources. First, the weight of authority among constitutional and media law scholars emphasises the necessity of stepping outside conventional regulatory models by employing more “context-based” and “systems thinking” approaches—a regulatory turn seemingly consistent with Canada’s pivot towards an innovative “systems-based” model. Second, any enhanced framework aimed at “cracking the code” of digital media regulation will benefit from profound insights native to the disciplines of social medicine and diagnostic theory. Besides providing a convincing case for expanding aetiological (and regulatory) inquiry to include the effects of social and environmental signals, established principles of medical diagnosis provide a valuable self-reflexive decision-making protocol for present-day regulators. Taken together, a careful review of “systems-inspired” regulatory scholarship and medico-diagnostic principles suggests that prevailing “systems-based” models—as epitomised by Canada’s proposed Online Harms Act—would appear to function as a “blueprint” for privatised government censorship,15 providing regulators with the legislative mandate, informational transparency, and compliance authority necessary for a form of regulatory capture that leading scholars have long understood as one of the Internet’s “Big Picture” regulatory dilemmas.16

In the end, just as earlier medical debates between germ theorists and proponents of social medicine exposed the importance of host responses and environmental cues to our knowledge of health and illness,17 contemporary tensions in the field of digital media regulation can shed much-needed light on the dangers of untreated structural threats to the discursive health of our global body politic.

II. Global Regulatory Approaches

Despite the original aim of cyber-libertarians to create an unfettered online environment, two predominant models of Internet regulation have emerged worldwide, reflecting fundamentally different schools of thought and approaches to freedom of expression.

1. “Notice-and-action” model (NetzDG/DSA)

Typified by Germany’s Network Enforcement Act (Netzwerkdurchsetzungsgesetz - NetzDG) and Europe’s Digital Services Act (DSA),18 the “notice-and-action” model is characterised by a relatively strict regulatory approach.19 This model limits digital platforms’ speech interests by obliging them to delete or block illegal online content within prescribed periods, ranging from 24 hours to seven days. Platforms must also provide an accessible and user-friendly complaints procedure for illegal online content, and are obliged to report potentially criminal content to law enforcement authorities.20 Importantly, systematic non-compliance leads to severe penalties.

Besides prompting extensive public and private co-optation, this regulatory model has suffered from ambiguous definitions of “illegal” online content: NetzDG, for example, references specific infractions in Germany’s Criminal Code (e.g. insult and disturbances to the public peace), whereas the DSA introduces a significantly broader definition that does not enumerate specific criminal provisions. This definitional ambiguity is ultimately left to digital platforms to resolve—a complex legal assessment that can produce broadly divergent results in each of the EU’s 27 Member States21—which places platforms in the unenviable role of powerful gatekeepers at the threshold of human rights.

2. “Market self-regulation” (USA)

Canonically associated with the United States of America, the “market self-regulation” model represents a fundamentally different approach to regulating online communications and is characterised by two essential elements. First, platforms are shielded from liability for speech torts committed on their platforms under section 230 of the Communications Decency Act (CDA). Second, the US Constitution provides an enlarged scope of protection for “offensive” speech under the First Amendment, including hate speech.22 In effect, “market self-regulation” allows platforms to determine—with minimal state interference and risk of liability—what content to carry and remove. Compared to the “notice-and-action” model, free-speech restrictions under “market self-regulation” are imposed not by government legislators, but through platforms’ own content moderation policies, or Terms of Use.

III. Canada’s “Systems-Based” Regulatory Proposal

Compared to the EU and America, Canada has embraced a novel “multi-stakeholder” approach to resetting its regulatory framework. In its consultative journey, the Canadian government has pivoted from conventional “notice-and-action” models to a more “systems-based” approach. By imposing a “duty to act responsibly” on digital platforms, Canada’s new Bill C-63 seeks to provide Canadian regulators with information and greater transparency about key ex ante and systemic decision-making processes taking place outside and upstream of more conventional models of ex post content review and error correction.

1. Moving from a “notice-and-takedown” to a “systems-based” model

Canada’s “multi-stakeholder” approach is notable for two particularities. Besides moving from a conventional “notice-and-takedown” to a more “systems-inspired” regulatory approach, Canadian legislators have shown a distinct preference for combating harmful online content rather than heeding and prioritising the concerns expressed by the public and experts alike about rising censorship and more structural threats to democratic governance.

a) Public consultation – concerns with privatised government censorship

Following its abandonment of Bill C-36, the Canadian government began public consultations soliciting Canadians’ views on regulating harmful online content. From July to September 2021, the government requested written submissions from the public and the tech industry on its original “notice-and-takedown” regulatory model (i.e. Bill C-36), and the associated technical and discussion papers.

While public respondents unanimously accepted the necessity of state intervention—as opposed to “market self-regulation”—far fewer supported the proposed legislative framework as a whole. Importantly, from the very beginning of Canada’s extensive regulatory planning, a broad cross-section of stakeholders expressed six main or “prominent” concerns about the dangers of censorship and the over-regulation of online content, relating to: (1) definitional clarity of harmful content; (2) proactive monitoring; (3) expedited takedown requirements (e.g. 24-hour rule); (4) economic drivers of platform content moderation; (5) bureaucratic overreach; and (6) transparency and accountability reporting duties.

First, respondents criticised the lack of definitional detail for online harms, warning that overly broad definitions would invite bias and could have a chilling effect that might “create a broader trend toward over-censorship of lawful expression writ large”.23 Second, quite aside from its present-day reality, stakeholders expressed concern that a general proactive monitoring obligation on platforms would be extremely problematic as it would “likely […] amount to pre-publication censorship”, and ultimately “operate as a de facto system of prior restraint”.24 Third, many respondents called for removing the 24-hour takedown rule borrowed from Germany’s NetzDG, arguing that “it would incentivize platforms to be over-vigilant and over-remove content […]”.25 Fourth, multiple respondents keenly observed that rather than focus exclusively on content moderation, regulators should target “the economic factors that drive platform design and corporate decision making”,26 including other “[…] structural factors like advertising practices, user surveillance, and algorithmic transparency […]”.27 Fifth, despite the overall enthusiasm for urgent regulatory intervention, stakeholders questioned “the number of regulatory entities, emphasizing potential overlaps in authority and the sheer size of the proposed bureaucratic structure dedicated to ‘censoring’ online expression”.28 Sixth and finally, moderate concern was expressed about transparency and accountability requirements. Respondents hoped that mandated and audited transparency, as one of the most powerful governance tools, could operate as “important safeguards to mitigating the regime’s potential for over-removal and censorship”.29

b) Expert consultation – pivoting to a “systems-based” regulatory approach

The second phase of Canada’s “multi-stakeholder” approach involved the solicitation of expert advice. In March 2022, an Expert Advisory Group on Online Safety (EAG) was convened, composed of Canadian experts in platform governance and content regulation, online harms, civil liberties, informatics, and national security. Its dual mandate was to provide insights and recommendations on how best to design a legislative and regulatory framework to address harmful online content, and to advise on “how to best incorporate the feedback received during the national consultation […]”.30 Like ordinary Canadians, the EAG endorsed state regulation, proclaiming that online safety “cannot be left to the good graces of industry players”.31

Remarkably, while two of the main censorship concerns voiced in the public consultation were taken up by the EAG (i.e. definitional clarity and proactive monitoring), the remaining worries were effectively downplayed or disregarded. Although expert comment was anonymised by the government, the issue of generalised or proactive platform monitoring was raised repeatedly in two of the ten EAG workshops. When advising on the appropriate types of regulatory content, multiple experts worried that “whatever framework is chosen, it would be critically important that it not incentivize a general system of monitoring”.32 When experts turned their minds to evaluating the new regulatory approach under consideration, some stressed that “there is a risk that a systems-based approach could indirectly promote a system of general monitoring”, advising that “each legislative provision must be scrutinized to ensure no general monitoring obligation exists […]”.33 Moreover, besides confirming earlier concerns with definitional uncertainties regarding harmful content,34 the EAG expanded these to include the new framework’s proposed “duty to act responsibly”.35 Experts cautioned that “if regulated services are not told how to comply with their duty to act responsibly, the systems they put in place might be rudimentary and result in blunt over-regulation […]”.36

Notwithstanding other minor and less-specific references to freedom of expression and government censorship, the EAG took particular interest in regulating disinformation, with most experts agreeing that “the Government cannot be in the business of deciding what is true or false online, or of determining intent behind creating or spreading false information”.37 In a statement reminding Canadians of the grave dangers of regulatory capture, most EAG members insisted categorically that “the Government [cannot] censor content based on its veracity, no matter how harmful”.38 Finally, unlike the more critical and far-reaching citizen concerns with the economic drivers of online censorship—concerns more amenable (at least in theory) to acknowledging the economic foundations of over-filtering and over-blocking—some members of the EAG highlighted only the financial and economic drivers of disinformation. Apparently unwilling or reluctant to contemplate the relationship between economic motives and online censorship, these experts nonetheless suggested that successful answers to disinformation may lie beyond regulatory reach if advertising law and practices were not altered to effectively “demonetize disinformation”.39

Ultimately, apart from these relatively few and abridged regulatory concerns, previously vetted worries about rising regulatory capture and privatised government censorship did not appear to resonate as strongly with Canada’s expert panel.

c) Citizens’ assemblies on democratic expression and national roundtable discussions

The final phases in Canada’s lengthy consultative process involved important input from the Canadian Commission on Democratic Expression and the Department of Canadian Heritage, which provided vital feedback on the EAG and the state of regulatory input to date. Importantly, as with initial public consultations, significant concerns were again expressed about the dangers of censorship and avoiding over-regulation of speech interests.

aa) “Capstone” assembly on democratic expression – protecting dissenting opinions

Following the EAG’s counsel on how best to design a regulatory framework for addressing harmful online content, Canadian Heritage requested a third and final Citizens’ Assembly on Democratic Expression to review and respond to the EAG’s suggestions and to all of the work that had preceded its input and efforts. At stake in the minds of many members of this “capstone” assembly was nothing less than the future of Canadian democracy.40

Although reflecting the emerging consensus on the urgent need for state regulation, this second public consultation again acknowledged the vital importance of avoiding censorship and over-regulation of free expression. First, Assembly members insisted that “online users […] be able to share dissenting or unpopular opinions”,41 and that any risk-based model contain appropriately “strong whistle-blower protections”.42 Second, comparable to feedback from the first public consultation in 2021, Assembly members pointed out the detrimental economic implications and overall costs of digital platforms’ business models and their over-reliance upon click-through ads in our digital “attention economy”, warning that platforms’ overriding “goal of profit from advertising sales comes at a detrimental cost, and with great disregard, to the well-being of our society”.43

bb) National roundtable discussions – misapprehending economic regulatory motives

Finally, in July 2022—shortly after the EAG completed its work—the Canadian government conducted 19 nationwide roundtables to incorporate victim and platform perspectives on the EAG’s advice and recommendations.44

As confirmed throughout the consultative process, consensus was again reached over the urgent need for state regulation of harmful online content. Still, evidencing a concomitant fading of concern about censorship and over-regulation, participant feedback was limited to passing references to the dangers of government involvement in regulating disinformation, and to the regulatory implications of platforms’ business models. Echoing the EAG’s insistence that the government cannot be deciding what is “true” or “false” online, roundtable participants expressed great unease “at the notion that the government should be the entity deciding what material constitutes misinformation and disinformation”.45 Importantly, this feedback provided yet more evidence of persisting confusion over the scope of effects of economic factors on content moderation. Many participants expressed concern only about their impact on delaying removal of harmful online content, voicing scepticism over “the willingness of social media platforms to self-regulate content […] due to the site traffic and revenue the content can generate”, and “platforms prioritizing profits rather than monitoring content […]”.46 Besides implicitly endorsing proactive monitoring, participants again overlooked the impact of economic drivers on over-filtering and over-blocking, and the more veiled dangers of privatised government censorship.

2. Bill C-63: Canada’s latest regulatory framework

On 26 February 2024, Canada introduced Bill C-6347—its long-awaited regulatory framework for addressing harmful online content. Besides amending (among others) the Criminal Code and the Canadian Human Rights Act (CHRA), Bill C-63 introduced the Online Harms Act, intended to make good on the government’s earlier promise to Canadians of “getting the legislation right”.

Besides imposing sensible duties to protect children and to make non-consensually distributed intimate images and child pornography inaccessible in Canada within 24 hours, the Online Harms Act places on digital platforms a “duty to act responsibly”, requiring them to implement measures to mitigate the risks that users will be exposed to harmful content. This negligence-based duty requires (above all) that platforms submit regular Digital Safety Plans—containing detailed risk assessments, mitigation strategies, and evaluations of their efficacy—to a newly established Digital Safety Commission of Canada, whose mandate would be administering and enforcing the Act. Besides this governing regulatory body, the proposed Act also establishes a Digital Safety Ombudsperson to support users of regulated services and to advocate for the public respecting systemic online safety issues, and a Digital Safety Office of Canada to provide administrative support to the two newly created agencies.

Consistent with Canada’s regulatory focus on combating harmful online content, Bill C-63 includes three vital harm-related provisions. First, the Online Harms Act adds two additional categories of harm (i.e. child bullying and self-harm) to the following five categories discussed throughout Canada’s consultative process, namely: (1) content that sexually victimises a child or revictimises a survivor; (2) intimate content communicated without consent; (3) content that foments hatred; (4) content that incites violent extremism or terrorism; and (5) content that incites violence. Second, Bill C-63 amends the Criminal Code by: (1) proposing a long-awaited definition of “hatred”; (2) creating a controversial standalone “hate crime” offence (punishable by up to life imprisonment) that applies to existing offences under the Criminal Code and other Acts of Parliament when motivated by hatred;48 (3) increasing penalties for existing hate crimes; and (4) instituting a new “peace bond” designed to prevent the commission of hate crimes and offences. Third and finally, Bill C-63 aims to reinstate Section 13 of the CHRA to make it a discriminatory practice “[…] to communicate or cause to be communicated hate speech by means of the Internet or any other means of telecommunication […]”,49 thereby broadening the scope of remedies for victims of online harm.

In the end, notwithstanding the broad range of public and expert concern voiced over the dangers of censorship and over-regulation of speech interests during its extended consultation process, Canadian legislators appear to have focused disproportionately on harmful content at the expense of addressing lower-salience structural threats to democratic governance.

IV. Differential Diagnosis in Online Governance

Having introduced Canada’s new “systems-based” framework, we demonstrate in Part IV that reliable indications as to its optimal form and content can be discerned from two key interdisciplinary sources: (1) constitutional and media law scholarship emphasising the necessity of employing “context-based” and “systems thinking” approaches to online regulation; and (2) profound regulatory insights native to the fields of social medicine and diagnostic theory. Taken together, these sources confirm that future regulatory models must openly embrace synthetic enquiry and carefully avoid overly reductionist approaches to online dysfunctions.

1. “Systems thinking”: Stepping outside conventional regulatory models

The nature and limitations of Canada’s “systems-based” model can first be gathered from leading constitutional and media law scholars who collectively endorse: a) adopting more structurally sophisticated means of integrating socio-technical-legal elements into regulatory theory and design; b) adopting novel “context-based” approaches to digital platform liability; and c) reframing content moderation in terms of “systems thinking”. Despite developing such insights within relatively narrow fields of reference, these scholarly efforts assist greatly in envisioning an integrative perspective on online regulation.

a) Multi-ordinal mapping of digital information flow

One of the most challenging aspects of ongoing technological advances in cyberspace has been reconciling their disruptive regulatory effects (and failures), and identifying the details and guiding principles for an effective global framework of Internet governance.50 Central to this aim has been confronting the “shaky” theoretical grounds underlying current regulatory structures and—given the Internet’s clash with the principle of territoriality—embedding technological advances into an effective global system.51 Despite a lack of consensus about the conceptual grounds of online regulation, scholars have agreed on an important feature of its structural complexity. Reflecting hard-won lessons of legislators worldwide, commentators insist that “a single concept cannot explain the complex structure of cyberspace” and hence resort to some form of “systems-inspired” or “interrelated thinking seems unavoidable”.52

aa) Murray’s three-dimensional “complexity matrix”

An important early contribution to defining possible future perspectives on Internet governance was provided by Andrew Murray.53 Writing in an earlier online era focused on optimising digital information flow, Murray’s principal insight was that cyberspace is a complex, even chaotic, environment that requires legislators to employ a “[…] more cohesive, measured, prudent and non-interventionist approach”.54 Distinguishing his pioneering regulatory theory from earlier “cyber-libertarian” and “cyberpaternalist” models, Murray’s “complexity thesis” rejected their joint assumption of a static regulatory setting by endorsing a more dynamic model capturing the complexities of State and private sector actors. Murray advised that by recognising parties’ dual roles as “regulator” and “regulatee”—and adopting a more dynamic “systems-inspired” view of the regulatory environment—legislators “[…] are offered the opportunity to produce effective complementary regulation”.55 Accordingly, in his bid to minimise disruption and to harmonise regulatory efforts with policy outcomes—both aims resonant with autopoiesis theory—Murray’s contrasting model of “symbiotic regulation” endorsed a distinctive protocol harnessing the complex relationships between the various regulatory actors.56

Inspired by these biological and remedial concepts, Murray introduced a novel three-dimensional matrix for structuring and regulating complex, digital media environments.57 According to Murray, successful online regulation requires that the complexity of the broader media environment be accurately mapped, including the communications networks already in place.58 Recognising that “all actors in the regulatory environment play an active role […]”,59 interventions in such complex networked systems are fundamentally indeterminate in that “[…] the complexity of the matrix means that it is impossible to predict the response of any other point […]”.60 This, however, does not mean that cyberspace is fundamentally unregulable. Quite the contrary. Owing to the overall “malleability of its environment”,61 Murray insisted that our online environment is highly amenable to regulation using a reflexive three-step process.

The first step is to produce a dynamic model of the regulatory environment, being careful to record all relevant parties and to map their primary communication dynamics. The focus is not on capturing actual content, but on mapping the relationships between actors well enough to “anticipate the regulatory tensions that are likely to arise […]”.62 Second, based on the accuracy and comprehensiveness of this initial environmental modelling, regulatory interventions can be optimally formulated to anticipate and avoid regulatory tensions between its main actors, thereby offering a positive communication “to the subsystems, or nodes, within the matrix […]”.63 Murray further specified that these regulatory interventions are “intended to harness[] the natural communications flow by offering to the subsystems, or nodes […] a positive communication that encourages them to support the regulatory intervention”.64 Third, regulatory interventions must then be tested by monitoring positive and negative nodular feedback. According to Murray, whether aiming to reinforce already successful regulations, or to engender modifications directed at enhancing deficient regulatory outcomes, “[…] regulator[s] should be prepared in light of this feedback to make alterations in their position and to continue to monitor feedback on each change […]”.65 By following this three-stage process, regulators are best equipped “to design successful […] interventions in the most complex regulatory environment”.66
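To make the cyclical character of this protocol more tangible, the following toy simulation (our own illustrative construction in Python, not drawn from Murray’s work, with entirely hypothetical actors and values) restates its three stages: map the actors in the regulatory environment, formulate an intervention, and then monitor nodular feedback and adjust.

```python
# A toy rendering of Murray's reflexive three-step protocol (illustrative only).

# Step 1: a crude "map" of the regulatory environment - each node holds a
# preferred regulatory position on a 0-1 scale (hypothetical values).
actors = {"state": 0.8, "platforms": 0.3, "users": 0.5, "civil_society": 0.6}

# Step 2: an initial intervention, pitched at the mapped midpoint of positions.
intervention = sum(actors.values()) / len(actors)

# Step 3: monitor nodular feedback cycle by cycle and adjust the intervention.
for cycle in range(5):
    feedback = {name: position - intervention for name, position in actors.items()}
    tension = sum(abs(f) for f in feedback.values())
    print(f"cycle {cycle}: intervention={intervention:.2f}, tension={tension:.2f}")
    # nudge the intervention towards the net direction of the feedback received
    intervention += 0.5 * sum(feedback.values()) / len(feedback)
```

The point of the sketch is structural rather than substantive: on this view the regulator never simply “finishes”, but keeps re-mapping and re-calibrating in response to feedback, as Murray’s third step prescribes.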

Ultimately, while criticised as being “difficult to implement” and “[…] impossible to adequately carry out”,67 Murray’s “complexity thesis” nonetheless remains a vital early contribution to confronting the rising challenges of regulating complex networked environments.

b) “Context-based” approaches to regulating platform liability

A second indication as to the nature and limitations of Canada’s “systems-based” regulatory model can be gathered from examining the underlying bases of platform liability. Several forward-thinking scholars have endorsed a broad array of “context-based” models.

aa) Lavi’s “descriptive social technological” model

A significant early contribution to online regulatory theory and design in the social media era was Michal Lavi’s innovative “context-based” model.68 Aiming to reconcile tensions between prevailing legal rules and the attribution of liability for online speech torts, Lavi noted presciently that our modern-day digital media ecology places the right to free expression and its underlying justifications decidedly in “a new light”.69 Concerned particularly about the “chilling effect” of holding content providers liable for speech torts committed on their platforms, Lavi cautioned that a single, overarching regulatory approach would be “insensitive to different online contexts and lead to distortions and improper [regulatory] consequences”.70

In response, Lavi endorsed an innovative “descriptive social technological” model erected on a three-level conceptual taxonomy that matches liability rules to an overarching sociological criterion: the strength of social ties and their potential for causing harm. Dividing digital platforms into three categories with increasingly strong social ties—(1) “freestyle platforms” (e.g. Yahoo! Message board); (2) “peer production platforms” (e.g. Yelp and other user review sites); and (3) “deliberation and structuring communities” (e.g. Meta (formerly Facebook), X (formerly Twitter), and other social networks)—Lavi proposed, in simplest terms, a model of “differential liability regimes”,71 arguing that since platforms’ various technical and functional capabilities influence speech-related harms differently, liability should increase concomitantly with each platform’s potential for causing such harm. That is to say, whenever the severity of harm is low and there is a substantial likelihood of private ordering, legal regulations are unnecessary. But where the social media context increases harm to external victims and results in a failure of private ordering, content providers should not be granted legal immunity (e.g. under section 230 CDA), and should be subject to some form of “notice-and-takedown” procedure.72 Consistent with earlier warnings about the impracticality of Murray’s “complexity thesis”, Lavi advised that her regulatory model—along with “context-based” approaches generally—might provide courts and legislators with a more practical alternative: “[…] a simple rule of thumb for defining content providers’ scope of liability”.73
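Stated schematically, and at the risk of oversimplification, Lavi’s “rule of thumb” might be rendered as follows. The sketch below is our own reconstruction in Python; the category labels follow Lavi’s taxonomy, but the decision logic is a compressed paraphrase rather than her full model.

```python
# Our simplified paraphrase of Lavi's "differential liability" rule of thumb.

def liability_regime(platform_type: str, harm_severity: str,
                     private_ordering_likely: bool) -> str:
    """Map a platform category and harm context to an indicative liability regime."""
    if harm_severity == "low" and private_ordering_likely:
        return "no legal regulation needed (private ordering suffices)"
    if platform_type == "freestyle":                 # e.g. open message boards
        return "broad immunity (weak social ties, limited potential for harm)"
    if platform_type == "peer_production":           # e.g. user review sites
        return "intermediate duties (e.g. notice-and-takedown on complaint)"
    if platform_type == "deliberation_community":    # e.g. social networks
        return "no blanket immunity; notice-and-takedown or stricter duties"
    return "unclassified: assess context"

# Example: a social network hosting severely harmful content, where private
# ordering has failed.
print(liability_regime("deliberation_community", "high", False))
```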

Importantly, the regulatory implications of Lavi’s “context-based” model extend well beyond issues of doctrinal coherence. Reiterating concerns about lower-salience structural threats to democracy advanced by leading free speech scholars like Jack Balkin,74 Lavi stressed that the fundamental motive for platform content moderation is “economic and not driven by legal considerations”.75 This point is critically important not only for “optimally balancing” competing policy rationales underlying platform liability, but also for identifying the “root causes” of over-filtering and over-blocking, and for acknowledging the potential for and dangers of privatised government censorship—structural concerns vital both to the maintenance of a healthy marketplace of ideas, and to effectively holding power to account.76

Ultimately, besides the utility of Lavi’s model for ensuring doctrinal coherence and reform, it also attests to the regulatory dangers of ignoring the discomfiting reality that the “economic logic” driving platform content moderation too often conflicts with human rights norms, particularly free speech and its vital “checking function” rationale.77

bb) Sander’s “structural” human rights law model

A second valuable contribution to online regulatory theory and design in the social media era was Barrie Sander’s “structural” human rights law model.78 Building on many of the “context-based” regulatory insights noted earlier, Sander argued that shifting to a more structural conception of human rights law would—by broadening Lavi’s approach to platform liability even further—require “[…] a more holistic and evidence-based approach to the design of intermediary liability laws that strives to account for the systemic effects of such frameworks on online expression”.79 Calling for greater state protection of free speech, Sander’s “structural approach” to regulating online content requires that sufficiently “[…] robust mechanisms of transparency, due process, accountability and oversight are embedded in platform moderation systems […]”,80 including government and cross-platform collaborations.

By examining content moderation (and data protection) liability within the wider context of rising accountability deficits pervading our digital media ecology,81 Sander took aim at the prevailing “marketized” model of human rights law in our “increasingly, privately controlled, neoliberal communication sphere”.82 In particular, Sander argued that a marketised conception premised on the laissez-faire notion of “[…] protect[ing] individual choice and agency against state intervention” is problematic for two reasons.83 First, it endorses a form of abstract individualism that “[…] neglects power asymmetries between individual users and other actors that participate in the social media ecosystem […]”.84 Second, it pays limited attention “[…] to the systemic effects of state and platform practices on the social media environment as a whole”.85

In response, Sander endorsed a “structural” conception of human rights law, one typified by “a greater openness to positive state intervention as a means of safeguarding public and collective values such as media pluralism and diversity”.86 By doing so, Sander aimed not only to contest the use of human rights discourse in the realm of social media governance,87 but also to “[…] begin to close the accountability deficits associated with content moderation […]” that increasingly threaten our democracies.88 While leaving the regulatory details unspecified, Sander’s commitment to preserving the “functionality” of our digital public sphere provides important normative grounds for expanding our regulatory toolbox to include “common carrier” doctrine for mitigating platform censorship and increasing the quantity and diversity of democratic discourse.89

In the end, when interpreted in light of Murray’s three-dimensional “complexity matrix” and Lavi’s “descriptive social technological” model of platform liability, Sander’s model again attests to the vital importance for online regulators of turning their minds to the broader regulatory environment—including its primary stakeholders’ economic motives and discursive predilections—for clues to calibrating our regulatory interventions to better promote international human rights, domestic policy goals, and the health of our online environment.

c) Content moderation as “systems thinking”

A third indication as to the nature and limitations of Canada’s “systems-based” model can be inferred from scholarship endorsing a “second wave” of more sophisticated regulatory frameworks for online content moderation. Looking to step outside overly reductionist models, legal scholars have continued to incorporate key concepts and insights from systems theory to optimise our understanding and regulation of today’s digital media environment.

aa) Douek’s “monitored self-regulation” model of content moderation

A third notable contribution to online regulatory theory and design in the social media era was Evelyn Douek’s ambitious reframing of content moderation (and its regulatory dynamics) in terms of “systems thinking”.90 Arguing that today’s content moderation models (e.g. “notice-and-action” and “market self-regulation”) are equally outdated, misleading, and incomplete,91 Douek claimed that the “blind spots” and mistaken assumptions of this “standard” regulatory picture—a “first wave” of regulation focused incorrectly on ex post review of individual online posts and error correction—must be updated and replaced with a “second wave” capturing the underlying “patterns and interrelationships” of our modern regulatory landscape. As Murray foresaw a generation earlier, Douek maintained that content moderation is ultimately a complex and dynamic system of “mass speech administration”,92 which requires wide-ranging procedural design interventions focused more on “[…] systems rather than individual cases, on wholes and interrelationships rather than parts, and on ‘patterns of change rather than static snapshots’”.93

Starting from the sensible premises that “there will never be agreement on what constitutes ‘good’ content moderation”94 and—perhaps most importantly—that “the status quo of private companies determining matters of […] public significance without any form of accountability, transparency, or meaningful public input is inadequate”,95 Douek’s main regulatory objective involves achieving “meaningful accountability” by reframing content moderation as a complex and dynamic administrative system.96 Endorsing a self-styled “substance-agnostic” approach,97 Douek’s regulatory framework draws on familiar “principles and practices of administrative law”,98 focused more on “key ex ante and systemic decision-making” taking place outside and upstream of the standard picture’s familiar “assembly line” of ex post individual review and error correction. Rather than providing “substantive” reforms, Douek’s overriding objective of mitigating online “accountability deficits”—a policy aim endorsed earlier by Sander—requires adopting two coordinate sets of structural and procedural reforms.

First, any proper system of “mass speech administration” must begin by restructuring internal platform moderation bureaucracies to avoid unreported bias and to incentivise neutral enforcement of their Terms of Use.99 Douek’s “separation of functions” principle hence requires intra-corporate separations of personnel and functions “that aim to ‘eliminate the incentives that would make [biased] conduct possible or likely in the first place’”.100 Second, rather than relying on “user-initiated complaints in individual cases”,101 a more comprehensive governance framework must authorise a suitable regulatory body—as reflected by Canada’s proposed Digital Safety Commission—to operate an “external channel” for fielding complaints and conducting its own investigations. Third, to best facilitate regulatory oversight of complex content moderation systems, platforms should be required “to disclose the nature and extent of involvement of outside decisionmakers in their content moderation […]”,102 including external “fact-checkers” and (at least in theory) government agencies. Lastly, as accepted by Canadian legislators, Douek proposed a scheme of regular platform reporting obligations (reflected in Canada’s proposed Digital Safety Plans) designed to expose “the broader functioning of their [content moderation] systems”,103 which purports only to improve accountability and to prevent regulators from “legislating in the dark”.104

Besides these structural reforms, Douek argued that optimising regulatory accountability requires digital platforms to comply with three procedural fiats. First, while admitting that platform self-reporting “may sound like a feeble form of accountability”,105 and that the “[e]mpirical effects of speech regulation are deeply contested”,106 platforms should nonetheless produce “annual content moderation plans and compliance reports”.107 Besides forcing them “to think proactively and methodically about potential operational risks”,108 as illustrated by Canada’s proposed Digital Safety Plans, Douek maintained that such disclosures can benefit regulatory efforts by: (1) creating a “paper trail” of platform decision-making that “facilitat[es] future review and accountability”;109 (2) facilitating policy learning by encouraging “cross-industry reporting” and formulating “general compliance standards” or “best practices”;110 and (3) much like Canada’s own consultative approach, facilitating public involvement through “multi-stakeholder” engagement with proposed regulations.111 Regardless of their efficacy, Douek sensibly insisted that as “a necessary first step to more sweeping reform”, we must first admit that “[t]here is […] no way of currently knowing what platforms have been doing, what works, and what doesn’t”.112

Douek’s second procedural proposal also aimed to improve informational transparency, in this case by requiring platforms both to demonstrate that “they have quality assurance […] measures in place for their decision-making systems”113—a core internal administrative law requirement—and to subject such self-assurances of “quality” to “independent auditing”.114 As Douek rightly cautioned, without independent verification, such “[…] transparency reports could be as accurate as Enron’s financial statements […]”.115 A third and final procedural recommendation would require platforms to offer an “aggregated review mechanism[]”.116 Instead of mandating appeals and procedural protections for individual online users, Douek insisted that to better identify and address system-wide trends, patterns, and failures, platforms should “review, as a class, all adverse decisions in a certain category of rule violation over a certain period […]”.117 Drawing on analogies to the EU’s data protection regime (i.e. the General Data Protection Regulation), Douek professed that these structural and procedural proposals together amounted to a model of “monitored self-regulation”, one that is more dynamic, better at leveraging the particular capacities of private and public actors, and capable of generating a virtuous cycle of continuous iterative improvements.118
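To make the contrast with case-by-case appeals concrete, the following toy sketch (our own, written in Python with invented data) illustrates the most rudimentary form an “aggregated review” might take: adverse moderation decisions are examined as a class, per rule category and reporting period, so that system-wide error patterns become visible.

```python
from collections import defaultdict

# Invented sample of adverse moderation decisions (purely illustrative data).
decisions = [
    {"rule": "hate_speech", "period": "2024-Q1", "overturned_on_audit": True},
    {"rule": "hate_speech", "period": "2024-Q1", "overturned_on_audit": False},
    {"rule": "self_harm",   "period": "2024-Q1", "overturned_on_audit": False},
    {"rule": "hate_speech", "period": "2024-Q2", "overturned_on_audit": True},
]

# Aggregate by rule category and period, rather than appealing case by case.
buckets = defaultdict(list)
for decision in decisions:
    buckets[(decision["rule"], decision["period"])].append(decision["overturned_on_audit"])

for (rule, period), outcomes in sorted(buckets.items()):
    reversal_rate = sum(outcomes) / len(outcomes)
    print(f"{rule} / {period}: {len(outcomes)} adverse decisions, "
          f"audit reversal rate {reversal_rate:.0%}")
```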

In the end, despite Douek’s worthy aim of prompting a “second wave” of content moderation theory and regulatory design, many important aspects of her framework remain underdefined extensionally (e.g. capturing the extent of regulatory activity in our global public sphere),119 and significantly undertheorised—ironically in the areas of “systems theory” and accountability scholarship.120 Owing to its perfunctory engagement with these vital foundational materials—and its unnecessarily narrow view of “digital platforms” as the main unit of regulatory analysis—Douek’s model leaves the following broader regulatory issues unexamined: (1) the rising structural threats to democracy posed by the Internet’s ad-based business model, including its impact on over-filtering and over-blocking, and its overall effects on the quantity and quality of democratic discourse; and (2) the implications of a “systems-based” model for facilitating regulatory capture and sanctioning (perchance unintentionally) privatised government censorship.

2. A way forward: Regulatory insights from social medicine and diagnostics

Despite these residual scholarly gaps, perhaps the most valuable lesson to emerge from our review of “systems-inspired” models is the need to harness their collective capacity for optimising regulatory “diagnosis and improvement”—an important remedial goal of Douek’s model.121 Taking up this implicit mantle, we can glean further indications as to the nature and limitations of Canada’s proposed regulatory model by expanding our inquiry into the instructive parallels between the legal and medical sciences.

a) Insights from social medicine and theoretical biology

As we have maintained in the past,122 any regulatory framework aimed at “cracking the code” of online communications will benefit from exploring the considerable synergies between law and medicine.123 Recommending this same source of interdisciplinary insight when searching for suitable regulatory interventions in cases of constitutional limitation or infringements on liberty, US Supreme Court Justice Benjamin Cardozo encouraged courts and legislators alike to turn increasingly to “[…] medicine—to a Jenner or a Pasteur or a Virchow or a Lister as freely and submissively as to a Blackstone or a Coke”.124 Expressed on the crest of revolutionary twentieth-century advances in theoretical physics, Justice Cardozo’s open-minded views have since only gained in currency in light of the powerful insights these new scientific paradigms generated within the fields of social medicine and theoretical biology.

aa) Importance of social and environmental signals to public health regulation

One especially revealing nineteenth-century German medical anecdote (and pioneering medical figure) bears mention. It concerned a typhus epidemic that broke out in the winter of 1847 in Upper Silesia, an economically depressed Prussian province. The epidemic coincided with a famine, and conditions deteriorated so badly that government intervention became necessary. Following time-honoured practice, an outside expert was appointed to survey the situation and submit a regulatory report. The individual chosen for this seemingly routine task was the physician Rudolf Virchow, then aged 26 years and a junior lecturer in pathology at the Charité Hospital in Berlin.

The report based on his three weeks’ observation was revolutionary for its time and even now sets a standard for attempting to understand and change the social conditions that produce disease. Conspicuously, Virchow’s ‘medical’ proposals were quite limited. Since he located the origins of ill health in broader social conditions, the most reasonable regulatory approach to addressing the Upper Silesian ‘epidemic’ was to identify and alter the underlying factors that permitted it to occur. Virchow reasoned:

Don’t crowd diseases point everywhere to deficiencies of society? One may adduce atmospheric or cosmic conditions or similar factors. But never do they alone make epidemics. They produce only where due to bad social conditions people have lived for some time in abnormal situations. Typhus would not have spread epidemically in Upper Silesia if there had not been a physically and mentally neglected people […].125

Evidencing a growing awareness of the complex interrelationships between medicine, social conditions, and political reform, Virchow later insisted that if medicine was to fulfill its great task, then it must enter the public realm, famously declaring:

Medicine is a social science, and politics is nothing else but medicine on a large scale. Medicine, as a social science, as the science of human beings, has the obligation to point out problems and to attempt their theoretical solution: the politician, the practical anthropologist, must find the means for their actual solution […].126

Insisting that “[t]he physicians are the natural attorneys of the poor, and the social problems should largely be solved by them”,127 Virchow envisioned a medical profession that obliged physicians to investigate the complex relationships between socio-political stressors and corporeal experience. Virchow’s intriguing reversal of the traditional roles of doctors and lawyers was born of a deep conviction that medicine’s clinical realities must inform society’s organisation and structure, predominantly through careful design of its laws and regulations. Stressing their importance as society’s dominant prescriptive force, Virchow asked: “If medicine is the science of man both healthy and ill, which after all it should be, what other science could then be more appropriate to deal with law-making, in order to apply the laws that are given in mankind’s nature to the foundations of the organization of society”?128

Ultimately, while Virchow’s inquiries into the social origins of illness were to help establish the interdisciplinary scientific field of “social medicine”, these issues quickly fell from sight owing to the more reductionist scientific developments that shaped the course of medicine during the late-nineteenth century—particularly the germ theory of disease.129

bb) The biopsychosocial response: A “systems-based” paradigm of health and illness

The urgency of developing a new medical paradigm responsive to such diagnostic blind spots was reinforced by George Engel.130 In Engel’s view, medicine was in crisis because of its adherence to a disease model that was no longer adequate for its scientific tasks and social responsibilities.131 Like Virchow before him, Engel hoped for an epistemological shift in medical science focused on greater interaction, with renewed emphasis on defining adaptive genetic and epigenetic limitations as they are set by broader social and environmental signals. Arguing for a revolutionary “systems-inspired” biomedical paradigm—one typified by a transactional, holistic, analogical, and probabilistic approach—Engel effectively confirmed Virchow’s more tentative causal inferences, instructing:

No linear concept of etiology is appropriate; rather, the pathogenesis of disease involves a series of negative and positive feedbacks with multiple simultaneous and sequential changes potentially affecting any system of the body. The central nervous system is so organized that a reciprocal interrelationship between the mental apparatus and the rest of the body in the pathogenesis of disease states and maintenance of health is not only possible but inevitable.132

Among its implications, Engel’s general systems theory-inspired “biopsychosocial” model requires physicians to explore complex relationships between social stressors and bodily experience, to study how the corporealisation of cultural experience occurs, and to explore humanity’s adaptive limits to rising levels of immunological stressors. Reflecting the “systems thinking” that led Rudolf Virchow to designate nineteenth-century physicians “the natural attorneys of the poor”,133 this new model implicated physicians in wider political debates from which modern conceptions of suffering and disease often insulate them, an insulation achieved by containing suffering within the sole rubric of prevailing (and potentially misleading) microbiological and genetic disease models.134

In the end, Engel anticipated that as the social bases of health and illness were gradually revealed, new avenues of research could be opened in precisely the way that Thomas Kuhn had in mind—generating a “systems-inspired” paradigm shift in medical science that might through its example advance broader socio-political regulations.135 That is, Engel’s “biopsychosocial” paradigm might yet inspire and foster amongst today’s regulators a similar perspectival shift in global online governance—in this case, to a more scientifically probing and less ideologically encumbered and contextually reductionist “systems-inspired” approach.

b) Regulatory insights from medical diagnostics

Besides these structural insights from social medicine and theoretical biology, valuable clues for designing “systems-inspired” regulatory models can also be grasped from the principles and methods of medical diagnostics.

aa) The diagnostic process: “Clinical reasoning” in conditions of uncertainty

Instructive synergies between “systems-based” regulatory approaches and the principles and practices of medical diagnosis can be shown by analysing the latter’s three conceptual pillars.

First, and above all, diagnosis is a process.136 As with “systems-based” models committed to optimising “learning and iterative” regulatory outcomes, medical diagnosis consists of a similarly cyclical and “continuous process of information gathering, integration, and interpretation [that] involves hypothesis generation and updating prior probabilities as more information is learned” about hidden dysfunctions.137 Moreover, similar to regulatory measures directed at rectifying dysfunctions in complex networked environments, the diagnostic process encompasses a self-reflexive method of “modification and refinement” that operates under conditions of uncertainty.138 As Professor Jerome P Kassirer, MD, explained:

Absolute certainty in diagnosis is unattainable, no matter how much information we gather, how many observations we make, or how many tests we perform. A diagnosis is a hypothesis about the nature of a patient’s illness, one that is derived from observations by the use of inference. As the inferential process unfolds, our confidence as [clinicians] in a given diagnosis is enhanced by the gathering of data that either favor it or argue against competing hypotheses. Our task is not to attain certainty, but rather to reduce the level of diagnostic uncertainty enough to make optimal therapeutic decisions.139

Of utmost relevance to regulatory interventions, a critical issue throughout the diagnostic process, then, is deciding when sufficient information has been obtained to make a reliable diagnosis.
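The probabilistic core of this updating process can be stated compactly. Using Bayes’ theorem with purely illustrative numbers of our own choosing (a prior probability of 10 per cent for the suspected condition D, a test sensitivity of 80 per cent, and a false-positive rate of 20 per cent), a single positive finding E raises the working probability as follows:

\[ P(D \mid E) = \frac{P(E \mid D)\,P(D)}{P(E \mid D)\,P(D) + P(E \mid \neg D)\,P(\neg D)} = \frac{0.8 \times 0.1}{0.8 \times 0.1 + 0.2 \times 0.9} \approx 0.31 \]

Each further item of evidence repeats the same update, with the posterior serving as the new prior; the clinician’s (and, by analogy, the regulator’s) task is to decide when that probability is high enough, or the residual uncertainty low enough, to justify acting.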

Second, this shared decision-making context of “diagnostic indeterminacy” has inspired a common evaluative approach. Namely, much like the importance of political experience and judgment to formulating useful legislative measures, “[a]ccurate, timely, and patient-centered diagnosis relies on proficiency of clinical reasoning”,140 an evaluative process that involves the proper exercise of “judgment under uncertainty”.141 Based “[…] within clinicians’ minds (facilitated or impeded [contextually] by the work system)”,142 and influenced by “dual process theory” (i.e. a combination of analytical and non-analytical models), clinical reasoning has been defined by the National Academy of Sciences “[…] as the clinician’s quintessential competency”—being “the cognitive process that is necessary to evaluate and manage a patient’s medical problems”.143

Third, the conceptual model of medical diagnosis also demonstrates—not unlike Murray’s and Sander’s “systems-inspired” regulatory models—that the diagnostic process takes place within a complex, dynamic, and interrelated context (i.e. a “work system”), consisting of: (1) diagnostic team members; (2) tasks; (3) technologies and tools; (4) organisational factors; (5) the physical environment; and (6) the external environment. As with “systems thinking” more generally, it is crucial to recall that—like Murray’s “complexity thesis” and the many levels of abstraction involved in Engel’s “biopsychosocial” model—this diagnostic “work system” provides the inescapable context within which evaluative decision-making occurs, meaning, perhaps above all, that “[a]ll components of the work system interact, and […] affect the diagnostic process […]”.144 In short, all is relational.

bb) Regulatory lessons: Indeterminate interventions in multi-ordinal environments

As seen from medical diagnostics’ three conceptual pillars, the parallels between the decision-making processes and requirements of clinical reasoning and “systems-based” online regulatory models are salient, pointing to several key lessons.

First, there exists a striking structural similarity between Murray’s earlier regulatory proposals and the nature of diagnostic science. Despite his settled view that the online environment is indeterminate, Murray remained convinced of its malleability and amenability to regulation, prompting his endorsement of a “three-step” protocol remarkably similar to the diagnostic process. His self-reflexive stages of environmental mapping, regulatory intervention, and the evaluation and incorporation of regulatory feedback essentially restate the three diagnostic stages of information gathering, integration and interpretation, and updating working hypotheses.
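The structural parallel can also be stated schematically. The loop below is our own shorthand for the shared iterative form of Murray’s three stages and the diagnostic cycle; the function names and the toy “over_blocking” indicator are invented for illustration and do not appear in Murray’s work.

```python
# Schematic only: the shared iterative form of Murray's three self-reflexive stages
# (mapping, intervention, feedback) and the diagnostic cycle. All names are our own.

def regulatory_cycle(environment, intervene, evaluate, rounds=3):
    """Map the environment, intervene, then fold observed feedback into the next map."""
    model = dict(environment)                 # 1. environmental mapping / information gathering
    for _ in range(rounds):
        action = intervene(model)             # 2. regulatory intervention / working diagnosis
        feedback = evaluate(model, action)    # 3. evaluate feedback / update working hypotheses
        model.update(feedback)                # refine the map before the next pass
    return model

# Toy usage: a single hypothetical indicator ("over_blocking") nudged towards a target.
state = {"over_blocking": 0.6}
final = regulatory_cycle(
    state,
    intervene=lambda m: -0.1 if m["over_blocking"] > 0.3 else 0.0,
    evaluate=lambda m, a: {"over_blocking": round(m["over_blocking"] + a, 2)},
)
print(final)  # {'over_blocking': 0.3}
```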

Second, as Murray’s “complexity thesis” forecast—much like reframing health and illness within a broader biopsychosocial framework—cyberspace must be understood as a complex networked environment.145 Besides Murray’s regulatory call for a “non-interventionist” approach,146 the self-reflexive method driving medical diagnosis speaks (at the very least) to the fundamental procedural necessity of probing all potential aetiological (or regulatory) factors before closing the investigative process. Freed from unnecessary ideological impediments and investigatory blind spots, “systems-inspired” regulatory approaches must take seriously the full panoply of potential causal/aetiological factors. In other words, before regulatory problems can be effectively “overcome”, all relevant factors must first be tabled for consideration.

Lastly, this commitment to minimally encumbered scientific investigation significantly amplifies the structural regulatory concerns of Murray, Lavi, and Sander. By incorporating the broader “work system” into the diagnostic process—with its implicit recognition of the causal influence of the “physical” and “external” environments—scientific inquiry is freed from “blind spots” that compromise not only the diagnosis of hidden dysfunctions, but also the crafting of suitable prescriptions or “treatments”. Importantly, our comprehensive review of leading “systems thinking” models demonstrates that, even taken together, they pay insufficient attention to the systemic effects and prescriptive implications of state regulations and content moderation practices for the overall health of our digital public sphere. Whether in regulatory or academic contexts, more work needs to be done. When considered in light of Canada’s proposed Online Harms Act, the relevant but overlooked “social and environmental signals” would appear to be the economic drivers of contemporary digital censorship (i.e. over-filtering and over-blocking), and the relationship between its “systems-based” transparency obligations and the rise of privatised government censorship—factors intuited by average Canadians, but not taken up satisfactorily by either their expert advisors or their political representatives.

V. Conclusion

As we have seen, with the possible exception of Murray’s original “complexity thesis”, growing appeals to “systems-based” online regulatory approaches by legal commentators and regulators alike would appear to be at considerable risk of overpromising and underdelivering. It is more than a little ironic that, even after a comprehensive review of this burgeoning “second wave” of “systems-inspired” regulatory material, it remains difficult (if not impossible) to assemble the full complement of “patterns and interrelationships” that Canadian legislators initially seemed so eager to acquire. Despite their individual contributions, what remains to be done—very much like finding the missing pieces of “a bigger puzzle”—is to incorporate each scholar’s theoretical contributions and insights into a broader, composite regulatory framework better suited to tracking the systemic effects of state and platform practices on the overall social media environment. A critical and largely ignored component of any genuine “systems-inspired” regulatory approach must be to embrace systemic causation.

This need for a more integrative approach to online phenomena was also shown by profound insights native to social medicine and diagnostic theory. Besides providing a convincing case for expanding aetiological (and regulatory) inquiry to include the effects of social and environmental signals, established principles of medical diagnosis also provided a valuable decision-making protocol for online regulators. Here too, our extensive review of leading “systems-inspired” regulatory models indicates that the closest we can come to approximating the scientific neutrality and openness of the diagnostic method is to combine the contributions of leading scholars into a comprehensive system. Rather than supporting current regulatory preoccupations with harmful online content—as shown by Canada’s over-criminalisation of hate offences in its proposed Online Harms Act—early indications point to taking more seriously the underlying infrastructure and economic drivers not only of harmful content and disinformation, but of rising censorship and the risks of over-regulating online speech interests.

The key takeaway from our review of “systems-inspired” regulatory scholarship and medico-diagnostic principles is consequently that prevailing “systems-based” regulatory approaches—as epitomised by Canada’s proposed Online Harms Act—would appear to function as a “blueprint” for privatised government censorship, providing regulators with the legislative mandate, informational transparency, and compliance authority conducive to regulatory capture, a trade-off that leading free speech scholars have aptly labelled the “moderators’ dilemma”. That is to say, “the more speech-protective the government’s policy, the more hands-on the government’s approach will need to be”.147 As shown by Canada’s newly proposed “systems-based risk-assessment” model, this unsettling trade-off “sewn into the logic of the Internet” appears to apply not only to combating increasing online censorship through “must-carry” legal interventions (i.e. common-carrier laws preventing the exclusion of speakers or the restriction of content), but to all regulatory “proxy-censor” interventions aimed at tamping down harmful online content.148 Since Canadian regulators have not engaged in an uncompromising “differential diagnosis” of online phenomena—which, as we have seen, benefits diagnosticians and legislators alike by situating the patient’s or the public sphere’s symptoms in their broadest aetiological context—we are tempted, perhaps ironically, to look not to the future, but to the distant past.

After all these years, Virchow’s pioneering view on the diagnosis and regulation of public health remains an invaluable perspective that Canada and other countries would do well to study and apply. In a dynamic, interconnected world increasingly at odds with the principle of territoriality—where “physicians are the natural attorneys of the poor”, and politicians its “natural anthropologists”—it is with some surprise and much regret that we note any lingering controversy over whether we lawyers and jurists should bear a greater share of the solemn responsibility of serving as its “natural diagnosticians”.


  1. See generally M. McLuhan, The Gutenberg Galaxy: The Making of Typographic Man, 1962; M. McLuhan, Understanding Media: Extensions of Man, 1964. See also S. Grampp, Marshall McLuhan: Eine Einführung, 2011.↩︎

  2. See R. Stephenson and J. Rinceanu, “Digital Iatrogenesis: Towards an Integrative Model of Internet Regulation”, (2023) 1 eucrim, 73. See generally L. Floridi, The Fourth Revolution: How the Infosphere is Reshaping Human Reality, 2014; L. Floridi, “The End of an Era: from Self-Regulation to Hard Law for the Digital Industry”, (2021) 34 Philosophy & Technology, 619.↩︎

  3. See e.g. E. B. Laidlaw, Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility, 2015, p. 15; H. Jenkins et al., Confronting the Challenges of Participatory Culture: Media Education for the 21st Century, 2006.↩︎

  4. See e.g. G. M. Dickinson, “Big Tech’s Tightening Grip on Internet Speech”, (2022) 55 Indiana Law Review, 101; M. Moore and D. Tambini (eds.), Digital Dominance: The Power of Google, Amazon, Facebook, and Apple, 2018; K. Langvardt, “A New Deal for the Online Public Sphere”, (2018) 26 George Mason Law Review, 341, 381.↩︎

  5. See R. J. Hamilton, “Governing the Global Public Square”, (2021) 62 Harvard International Law Journal, 117; J. Peters and B. Johnson, “Conceptualizing Private Governance in Networked Society”, (2016) 18 North Carolina Journal of Law and Technology, 15.↩︎

  6. See J. Bayer, “Rights and Duties of Online Platforms”, in: J. Bayer et al. (eds.), Perspectives on Platform Regulation: Concepts and Models of Social Media Governance Across the Globe, 2021; K. Klonick, “The New Governors: The People, Rules, and Processes Governing Online Speech”, (2018) 131 Harvard Law Review, 1598.↩︎

  7. See U.S. Supreme Court, Biden v Knight First Amendment Institute, 141 S. Ct. 1220, 1221 (2021) (Thomas J., concurring).↩︎

  8. See S. Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, 2019; J. M. Balkin, “Old-School/New-School Speech Regulation”, (2014) 127 Harvard Law Review, 2296.↩︎

  9. See Brief of Professor P. Hamburger as Amicus Curiae in Support of Neither Party, U.S. Supreme Court, 24 October 2022, Ashley Moody, Attorney General of Florida et al. v. NetChoice, LLC et al., No. 22-277; J. M. Balkin, “Free Speech is a Triangle”, (2018) 118 Columbia Law Review, 2011.↩︎

  10. R. Stephenson and J. Rinceanu, (2023) 1 eucrim, op. cit. (n. 2), 73, 73–74.↩︎

  11. See Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken (Netzwerkdurchsetzungsgesetz – NetzDG) from 1 September 2017 (BGBl I p. 3352).↩︎

  12. See R. Aiello, “Where Does the Liberal Promise to Address Harmful Online Content Stand?”, CTV News.ca, 30 August 2022, para. 2, available at <https://www.ctvnews.ca/politics/where-does-the-liberal-promise-to-address-harmful-online-content-stand-1.6048720> accessed 5 April 2024.↩︎

  13. R. Aiello, op. cit. (n. 12), para. 17.↩︎

  14. R. Aiello, op. cit. (n. 12), para. 13.↩︎

  15. See e.g. Hamburger Brief, op. cit. (n. 9); S. Zuboff, op. cit. (n. 8); J. M. Balkin, (2014) 127 Harvard Law Review, op. cit. (n. 8), 2296; J. M. Balkin, “The First Amendment is an Information Policy”, (2012) 41 Hofstra Law Review, 1, 27–28 et seq., where the author provides a useful roadmap of “new school censorship”.↩︎

  16. See e.g. K. Langvardt, “Regulating Online Content Moderation”, (2018) 106 The Georgetown Law Journal, 1353, 1363.↩︎

  17. See e.g. R. Virchow, “Der Armenarzt”, (1848) 18 Die Medicinische Reform, 125.↩︎

  18. NetzDG, op. cit. (n. 11); Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L 277/1. See also Online Safety Act 2023 (UK).↩︎

  19. See J. Bayer, op. cit. (n. 6), p. 30.↩︎

  20. See R. Stephenson and J. Rinceanu, (2023) 1 eucrim, op. cit. (n. 2), 73, 74.↩︎

  21. See J. Bayer et al., “Conclusions: Regulatory Responses to Communication Platforms: Models and Limits”, in: J. Bayer et al. (eds.), op. cit. (n. 6), p. 571.↩︎

  22. See J-M. Kamatali, “‘Hate Speech’ in America: Is it Really Protected?”, (2021) 61 Washburn Law Journal, 163; J-M. Kamatali, “The Limits of the First Amendment: Protecting American Citizens’ Free Speech in the Era of the Internet & the Global Marketplace of Ideas”, (2015) 33 Wisconsin International Law Journal, 587.↩︎

  23. See Department of Canadian Heritage, “What We Heard: The Government’s Proposed Approach to Address Harmful Content Online” (Government of Canada, 3 February 2022), Lack of Definitional Detail section, para. 2, available at <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/what-we-heard.html> accessed 5 April 2024.↩︎

  24. Department of Canadian Heritage, “What We Heard”, op. cit. (n. 23), Proactive Monitoring Obligation section, para. 1 (emphasis in original).↩︎

  25. Department of Canadian Heritage, “What We Heard”, op. cit. (n. 23), 24-hour Inaccessibility Requirement section, para. 1.↩︎

  26. Department of Canadian Heritage, “What We Heard”, op. cit. (n. 23), Alternative Approaches section, para. 1 (emphasis added).↩︎

  27. Department of Canadian Heritage, “What We Heard”, op. cit. (n. 23), Alternative Approaches section, para. 1 (emphasis added).↩︎

  28. Department of Canadian Heritage, “What We Heard”, op. cit. (n. 23), The Necessity of New Regulators section, para. 1.↩︎

  29. Department of Canadian Heritage, “What We Heard”, op. cit. (n. 23), Transparency and Accountability Requirements section, para. 2.↩︎

  30. See Department of Canadian Heritage, “The Government’s Commitment to Address Online Safety” (Government of Canada, 8 July 2022), p. 2 (emphasis added), available at <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content.html> accessed 5 April 2024.↩︎

  31. See Department of Canadian Heritage, “Introductory Session” (Government of Canada, 28 April 2022), Theme F: The Regulatory Toolkit section, para. 3, available at <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/introductory-session.html> accessed 5 April 2024.↩︎

  32. See Department of Canadian Heritage, “Summary of Session Two: Objects of Regulation” (Government of Canada, 28 April 2022), Takedown Requirements section, para. 2 (emphasis added), available at <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/session-two-summary.html> accessed 5 April 2024.↩︎

  33. See Department of Canadian Heritage, “Summary of Session Three: Legislative and Regulatory Obligations” (Government of Canada, 6 May 2022), Human Rights section, para. 4 (emphasis added), available at <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/summary-session-three.html> accessed 5 April 2024; Department of Canadian Heritage, “Supplemental Worksheet: Legislative and Regulatory Obligations” (Government of Canada, 10 May 2022), General Monitoring Scheme section, para. 1 (emphasis added), available at <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/session-five-legislative-regulatory-obligations.html> accessed 5 April 2024.↩︎

  34. See Department of Canadian Heritage, “Summary of Session Six: Freedom of Expression and Other Rights” (Government of Canada, 27 May 2022), Theme A: Charter Rights section, para. 3, available at <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/summary-session-six.html> accessed 5 April 2024.↩︎

  35. See Department of Canadian Heritage, “Summary of Session Three”, op. cit. (n. 33), Theme A: Duties Imposed on Regulated Services section, para. 2.↩︎

  36. Department of Canadian Heritage, “Summary of Session Three”, op. cit. (n. 33), Human Rights section, para. 3 (emphasis added).↩︎

  37. See Department of Canadian Heritage, “Summary of Session Eight: Disinformation” (Government of Canada, 10 June 2022), Theme A: Understanding the Magnitude of the Challenge section, para. 5 (emphasis added), available at <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/summary-session-eight.html> accessed 5 April 2024.↩︎

  38. Department of Canadian Heritage, “Summary of Session Eight”, op. cit. (n. 37) (emphasis added).↩︎

  39. Department of Canadian Heritage, “Summary of Session Eight”, op. cit. (n. 37), Theme C: Approaches to Addressing Disinformation and its Effects through a Risk-Based Approach section, para. 6.↩︎

  40. 3rd Canadian Citizens’ Assembly on Democratic Expression, “Citizens’ Assembly on Democratic Expression: Recommendations for Reducing Online Harms and Safeguarding Human Rights in Canada”, (Public Policy Forum, 2022), p. 5.↩︎

  41. 3rd Canadian Citizens’ Assembly on Democratic Expression, “Citizens’ Assembly on Democratic Expression”, op. cit. (n. 40), p. 10, p. 51 (emphasis added).↩︎

  42. 3rd Canadian Citizens’ Assembly on Democratic Expression, “Citizens’ Assembly on Democratic Expression”, op. cit. (n. 40), p. 10.↩︎

  43. 3rd Canadian Citizens’ Assembly on Democratic Expression, “Citizens’ Assembly on Democratic Expression”, op. cit. (n. 40), p. 49 (emphasis added).↩︎

  44. See Department of Canadian Heritage, “What we Heard: 2022 Roundtables on Online Safety” (Government of Canada, 31 January 2023), available at <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/what-we-heard/report.html> accessed 5 April 2024. The public consultation process effectively ended with consultations with Indigenous peoples in January 2023. See Department of Canadian Heritage, “What we Heard Report: Indigenous Online Safety” (Government of Canada, 2023) available at <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/what-we-heard-online-safety.html> accessed 5 April 2024.↩︎

  45. Department of Canadian Heritage, “What we Heard”, op. cit. (n. 44), Surrey, British Columbia section, para. 2.↩︎

  46. Department of Canadian Heritage, “What we Heard”, op. cit. (n. 44), Charlottetown, Prince Edward Island section, para. 4, Saskatoon, Saskatchewan section, para. 4.↩︎

  47. Bill C-63, An Act to enact the Online Harms Act, to amend the Criminal Code, the Canadian Human Rights Act and An Act respecting the mandatory reporting of Internet child pornography by persons who provide an Internet service and to make consequential and related amendments to other Acts, 1st Sess., 44th Parl., 2021–2024 (first reading 26 February 2024).↩︎

  48. Bill C-63, An Act to enact the Online Harms Act, op. cit. (n. 47), cl. 15 (Part 2: Criminal Code).↩︎

  49. Bill C-63, An Act to enact the Online Harms Act, op. cit. (n. 47), cl. 34 (Part 3: Canadian Human Rights Act).↩︎

  50. See M. L. Mueller, Networks and States: The Global Politics of Internet Governance, 2010, p. 9. See also L. DeNardis, The Global War for Internet Governance, 2014, p. 6, who rightly emphasises that Internet governance encompasses both its technological infrastructure and the substantive policies developed around both its limitations and possibilities.↩︎

  51. See e.g. R. H. Weber, Realizing a New Global Cyberspace Framework: Normative Foundations and Guiding Principles, 2015, p. 1. See also G. Frosio (ed.), The Oxford Handbook of Online Intermediary Liability, 2020, Part VII. For a pioneering article on Internet jurisdiction and the limits of the principle of territoriality, see M. Geist, “Is There a There There? Towards Greater Certainty for Internet Jurisdiction”, (2001) 16 Berkeley Technology Law Journal, 1345.↩︎

  52. R. H. Weber, op. cit. (n. 51), p. 89 (emphasis added).↩︎

  53. See A. D. Murray, The Regulation of Cyberspace: Control in the Online Environment, 2007.↩︎

  54. A. D. Murray, op. cit. (n. 53), p. 54.↩︎

  55. A. D. Murray, op. cit. (n. 53), p. 237 (emphasis in original).↩︎

  56. A. D. Murray, op. cit. (n. 53), p. 244 (emphasis in original).↩︎

  57. A. D. Murray, op. cit. (n. 53), p. 54, pp. 233–251.↩︎

  58. A. D. Murray, op. cit. (n. 53), p. 234.↩︎

  59. A. D. Murray, op. cit. (n. 53), p. 234.↩︎

  60. A. D. Murray, op. cit. (n. 53), p. 53.↩︎

  61. A. D. Murray, op. cit. (n. 53), p. 53.↩︎

  62. A. D. Murray, op. cit. (n. 53), p. 250 (emphasis added).↩︎

  63. A. D. Murray, op. cit. (n. 53), pp. 250–251.↩︎

  64. A. D. Murray, op. cit. (n. 53), pp. 250–251.↩︎

  65. A. D. Murray, op. cit. (n. 53), p. 250.↩︎

  66. A. D. Murray, op. cit. (n. 53), p. 251.↩︎

  67. See R. H. Weber, op. cit. (n. 51), p. 90; C. Reed, Making Laws for Cyberspace, 2012, p. 220. Reed preferred a “heuristic” regulatory approach based on more abstract “rules of thumb” (ibid., p. 221).↩︎

  68. M. Lavi, “Content Providers’ Secondary Liability: A Social Network Perspective”, (2016) 26 Fordham Intellectual Property Media & Entertainment Law Journal, 855. See also S. B. Spencer, “The First Amendment and the Regulation of Speech Intermediaries”, (2022) 106 Marquette Law Review, 1.↩︎

  69. M. Lavi, op. cit. (n. 68), 879.↩︎

  70. M. Lavi, op. cit. (n. 68), 888 (emphasis added).↩︎

  71. M. Lavi, op. cit. (n. 68), 909.↩︎

  72. M. Lavi, op. cit. (n. 68), 930. Importantly, Lavi argued that “notice-and-takedown” regimes are superior to negligence models in that the latter often result in legal ambiguity that has a disproportionate “chilling effect” on content moderation (ibid.).↩︎

  73. M. Lavi, op. cit. (n. 68), 910 (emphasis added).↩︎

  74. M. Lavi, op. cit. (n. 68), 879, n. 117 cites J. M. Balkin, (2014) 127 Harvard Law Review, op. cit. (n. 8), 2296. See also J. M. Balkin, “How to Regulate (and Not Regulate) Social Media”, (2021) 1 Journal of Free Speech Law, 71; J. M. Balkin, (2018) 118 Columbia Law Review, op. cit. (n. 9), 2011.↩︎

  75. M. Lavi, (2016) 26 Fordham Intellectual Property Media & Entertainment Law Journal, op. cit. (n. 68), 855, 936–937 (emphasis added).↩︎

  76. See Hamburger Brief, op. cit. (n. 9).↩︎

  77. See generally R. Stephenson, A Crisis of Democratic Accountability: Public Libel Law and the Checking Function of the Press, 2018.↩︎

  78. See B. Sander, “Democratic Disruption in the Age of Social Media: Between Marketized and Structural Conceptions of Human Rights Law”, (2021) 32 European Journal of International Law, 159.↩︎

  79. B. Sander, op. cit. (n. 78), 192 (emphasis added).↩︎

  80. B. Sander, op. cit. (n. 78), 192 (emphasis added).↩︎

  81. B. Sander, op. cit. (n. 78), 160.↩︎

  82. B. Sander, op. cit. (n. 78), 162.↩︎

  83. B. Sander, op. cit. (n. 78), 162.↩︎

  84. B. Sander, op. cit. (n. 78), 162.↩︎

  85. B. Sander, op. cit. (n. 78), 162 (emphasis added), citing E. Douek, “Governing Online Speech: From ‘Posts-as-Trumps’ to Proportionality and Probability”, (2021) 121 Columbia Law Review, 759.↩︎

  86. B. Sander, (2021) 32 European Journal of International Law, op. cit. (n. 78), 159, 162 (emphasis added).↩︎

  87. See also B. Dvoskin, “Expert Governance of Online Speech”, (2023) 64 Harvard International Law Journal, 85; S. Marks, “Human Rights and Root Causes”, (2011) 74 Modern Law Review, 57.↩︎

  88. B. Sander, (2021) 32 European Journal of International Law, op. cit. (n. 78), 159, 163.↩︎

  89. See e.g. E. Volokh, “Treating Social Media Platforms like Common Carriers?”, (2021) 1 Journal of Free Speech Law, 377; G. Lakier, “The Non-First Amendment Law of Freedom of Speech”, (2021) 134 Harvard Law Review, 2299, 2316‒2331; J. M. Balkin, (2021) 1 Journal of Free Speech Law, op. cit. (n. 74), 71, 86‒87.↩︎

  90. E. Douek, “Content Moderation as Systems Thinking”, (2022) 136 Harvard Law Review, 526. See also O. Pollicino and E. Bietti, “Truth and Deception across the Atlantic: A Roadmap of Disinformation in the US and Europe”, (2019) 11 Italian Journal of Public Law, 43.↩︎

  91. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 526, 528, 538.↩︎

  92. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 532.↩︎

  93. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 530 (emphasis added).↩︎

  94. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 606.↩︎

  95. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 606 (emphasis added).↩︎

  96. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 530, 532.↩︎

  97. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 605.↩︎

  98. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 532.↩︎

  99. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 586.↩︎

  100. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), (emphasis added), citing L. M. Khan, “The Separation of Powers and Commerce”, (2019) 119 Columbia Law Review, 973, 980.↩︎

  101. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 589.↩︎

  102. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 591 (emphasis added).↩︎

  103. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 592.↩︎

  104. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 593.↩︎

  105. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 595 (emphasis added).↩︎

  106. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 594.↩︎

  107. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 593–600.↩︎

  108. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 595.↩︎

  109. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 596.↩︎

  110. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 597.↩︎

  111. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 597–598.↩︎

  112. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 600.↩︎

  113. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 601.↩︎

  114. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 601.↩︎

  115. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 601.↩︎

  116. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 602.↩︎

  117. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 602 (emphasis added).↩︎

  118. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 604.↩︎

  119. See e.g. R. J. Hamilton, (2021) 62 Harvard International Law Journal, op. cit. (n. 5), 117, 162, who argues that digital media regulation minimises key dynamics worldwide—particularly in the Global South—necessitating a flexible regulatory template based on “locally contextualised” content moderation rules.↩︎

  120. On systems theory, see e.g. N. Luhmann, Introduction to Systems Theory, 2013; G. Bateson, Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology, 1972; E. Laszlo (ed.), The Relevance of General Systems Theory, 1972; and L. von Bertalanffy, General Systems Theory: Foundations, Development, Applications, 1968. On accountability scholarship, see e.g. M. Bovens et al. (eds.), The Oxford Handbook of Public Accountability, 2014; M. Bovens, “Two Concepts of Accountability: Accountability as a Virtue and as a Mechanism”, (2010) 33(5) West European Politics, 946; M. Bovens, “Analysing and Assessing Accountability: A Conceptual Framework”, (2007) 13(4) European Law Journal, 447; and R. Mulgan, Holding Power to Account: Accountability in Modern Democracies, 2003. See also K. Klonick, “Of Systems Thinking and Straw Men”, (2023) 136 Harvard Law Review Forum, 139, where the author critiques Douek’s misuse of the term “systems thinking”, and points out the importance of engaging with the discipline’s foundational literature.↩︎

  121. E. Douek, (2022) 136 Harvard Law Review, op. cit. (n. 90), 526, 586 (emphasis added).↩︎

  122. See R. Stephenson and J. Rinceanu, (2023) 1 eucrim, op. cit. (n. 2), 73.↩︎

  123. R. Stephenson and J. Rinceanu, (2023) 1 eucrim, op. cit. (n. 2), 73, 78.↩︎

  124. See B. Cardozo, “Anniversary Discourse: What Medicine Can do for Law”, (1929) 5 Bulletin of the New York Academy of Medicine, 581, 584 (emphasis added).↩︎

  125. R. Virchow, Gesammelte Abhandlungen aus dem Gebiete der Öffentlichen Medicin und der Seuchenlehre, Erster Band, 1879, p. 121.↩︎

  126. R. Virchow, (1848) 18 Die Medicinische Reform, op. cit. (n. 17), 125, 125 (emphasis added).↩︎

  127. R. Virchow, “Was die ‘medicinische Reform’ will”, (1848) 1 Die Medicinische Reform, 2 (emphasis added).↩︎

  128. R. Virchow, Disease, Life and Man: Selected Essays by Rudolf Virchow, Lelland J. Rather (trn.), 1958, p. 66 (emphasis added).↩︎

  129. See R. Koch, “Die Ätiologie der Tuberkulose”, (1882) 15 Berliner Klinische Wochenschrift, 221.↩︎

  130. See G. L. Engel, “A Unified Concept of Health and Disease”, (1960) 3 Perspectives in Biology and Medicine, 459; G. L. Engel, “The Need for a New Medical Model: A Challenge for Biomedicine”, (1977) 196 Science, 129.↩︎

  131. See G. L. Engel, “Misapplication of a Scientific Paradigm”, (1985) 3 Integrative Psychiatry, 9.↩︎

  132. G. L. Engel, (1960) 3 Perspectives in Biology and Medicine, op. cit. (n. 130), 459, 485 (emphasis added).↩︎

  133. R. Virchow, (1848) 1 Die Medicinische Reform, op. cit. (n. 127), 2, 2.↩︎

  134. See e.g. R. C. Strohman, “Ancient Genomes, Wise Bodies, Unhealthy People: Limits of a Genetic Paradigm in Biology and Medicine”, (1993) 37 Perspectives in Biology and Medicine, 112.↩︎

  135. T. Kuhn, The Structure of Scientific Revolutions, 2nd ed., 1970.↩︎

  136. See E. P. Balogh et al. (eds.), Improving Diagnosis in Health Care, 2015, p. 31.↩︎

  137. E. P. Balogh et al., op. cit. (n. 136), p. 32.↩︎

  138. E. P. Balogh et al., op. cit. (n. 136), p. 34.↩︎

  139. E. P. Balogh et al., op. cit. (n. 136), pp. 48–49 (emphasis added).↩︎

  140. E. P. Balogh et al., op. cit. (n. 136), p. 53 (emphasis added).↩︎

  141. E. P. Balogh et al., op. cit. (n. 136), p. 53 (emphasis added).↩︎

  142. E. P. Balogh et al., op. cit. (n. 136), p. 53 (emphasis added).↩︎

  143. E. P. Balogh et al., op. cit. (n. 136), p. 53 (emphasis added).↩︎

  144. E. P. Balogh et al., op. cit. (n. 136), p. 34 (emphasis added).↩︎

  145. See generally Y. Benkler, “A Free Irresponsible Press: Wikileaks and the Battle Over the Soul of the Networked Fourth Estate”, (2011) 46 Harvard Civil Rights-Civil Liberties Law Review, 311; Y. Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom, 2006.↩︎

  146. A. D. Murray, op. cit. (n. 53), p. 54.↩︎

  147. K. Langvardt, (2018) 106 The Georgetown Law Journal, op. cit. (n. 16), 1353, 1363 (emphasis added).↩︎

  148. See S. B. Spencer, (2022) 106 Marquette Law Review, op. cit. (n. 68), 1, 9, where the author stressed that both regulatory forms differ significantly in their structural effects on “the total amount of speech” reaching the public sphere, viz., that “proxy-censor regulations limit the amount of speech in circulation, whereas must-carry regulations increase the amount of speech […]” (emphasis added).↩︎