Digital Iatrogenesis
Towards an Integrative Model of Internet Regulation
Abstract
Limitations associated with online regulatory frameworks can be better understood by integrating pertinent insights from medicine and theoretical biology. Using insights from the biopsychosocial model, we argue that contemporary Internet regulations are problematic for three reasons. First, they pay insufficient attention to the unique structural characteristics of our digital media ecology, which raise significant epistemological concerns for online regulators. Second, differences in human rights protection and constitutional structure present further challenges requiring keen sensitivity to political and constitutional contexts for optimizing regulatory calibration. Third, our digital media landscape is dominated by private digital platforms whose unprecedented power and business models increasingly imperil the quality and quantity of public discourse, and facilitate privatization of government censorship under the rubric of human rights protection. Without carefully considering these structural differences, regulators – much like physicians – can too easily find themselves treating only symptoms rather than the underlying ailment.
I. Introduction
Synergies between the outwardly disparate disciplines of law and medicine can be observed well into our recent past. Addressing such affinities at a Harvard Law School lecture in 1895, the celebrated legal realist and later US Supreme Court Justice Oliver Wendell Holmes Jr proposed that “[a]n ideal system of law should draw its postulates and its legislative justification from science”.1 Years later, addressing members of the New York Academy of Medicine, Justice Holmes’ successor and great admirer Benjamin Cardozo, then Chief Judge of the New York Court of Appeals, explored the significance of this interdisciplinarity in a memorable speech entitled “What Medicine Can Do For Law”.2 Along with his realist contemporaries who conceived of lawyers as “social clinicians” in a progressive era of “scientific jurisprudence”,3 Cardozo endorsed the growing scientific trend for “continuity of knowledge”,4 which challenged traditional academic subdivisions as largely false-to-facts and misleading.5 Advocating for greater integration between the legal and medical sciences, Cardozo proposed that when searching for answers to problems of constitutional limitation or permissible encroachments on liberty, courts and legislatures should increasingly turn to “[…] medicine – to a Jenner or a Pasteur or a Virchow or a Lister as freely and submissively as to a Blackstone or a Coke”.6 Importantly, theirs was a time when felt necessities required physicians to concentrate on “individual” practices of diagnosis and prescription, while solutions to broader social problems were thought the sole purview of lawyers and politicians.7 In an era of growing scientific rivalry between analytical research and intellectual synthesis, both Justices endorsed the latter by encouraging a multi-dimensional approach to scientific and legal fact finding, formulating value judgments, and charting effective political and legal reforms.
In today’s digital media environment, any sustained course of intellectual isolationism is neither feasible nor desirable. As shown by the European Union’s latest regulatory framework,8 along with parallel North American developments aiming to remedy offensive online content,9 there remains an urgent need for our medical and legal professions to join forces in seeking effective solutions to global Internet regulation by better understanding online social problems whose epistemic nature and receptiveness to standard politico-legal interventions have changed radically. Whether one considers Europe’s ascendant “notice-and-takedown” model, which relies upon and strengthens public/private co-optation, or the North American model of “market self-regulation”, which immunizes digital intermediaries from liability for speech torts and provides greater protection for “offensive” speech, these models represent different approaches to regulating online communications and symbolize profound disagreement over free speech’s role in, and relationship to, democratic governance.
In this article, we argue that contemporary Internet regulations are problematic for three reasons. First, they pay insufficient attention to the unique structural characteristics of our digital media ecology, which raise significant epistemological concerns for online regulators. Without carefully considering these structural differences, regulators – much like physicians – can too easily find themselves treating only symptoms rather than underlying diseases and their aetiology. Second, differences in human rights protection and constitutional structure present further challenges, particularly in filtering and blocking online speech, which require keen sensitivity to political and constitutional contexts for optimizing regulatory calibration. Third, the unprecedented power of private digital platforms that own and effectively control the Internet’s infrastructure facilitates privatized government censorship which, along with existing economic incentives, imperils the quality and quantity of public discourse.
Overall, we are confronting a unique regulatory dilemma involving the balancing of many “opposed maximisers”, such as freedom of expression, social media platforms’ interests in censoring and selling user content for profit, and the functional needs of deliberative democracy and holding power to account. To adapt a phrase popularized by philosopher and social critic Ivan Illich, any resulting imbalance in our online regulatory milieu can be fairly seen as lying at humanity’s collective feet – a new, potentially more dangerous form of “digital iatrogenesis” is now upon us.10
II. Online Governance in Europe
Internet regulation is dominated in Europe by an emergent “notice-and-takedown” approach. Leading examples are Germany’s pioneering Network Enforcement Act (Netzwerkdurchsetzungsgesetz – NetzDG),11 and the EU’s new Digital Services Act (DSA).12
1. Germany’s NetzDG: “notice-and-takedown” model
The world’s principal Internet regulatory model is epitomized by Germany’s NetzDG, which entered into force on 1 October 2017. Intended to improve upon digital intermediaries’ own efforts to address problematic online content through their Terms of Use, NetzDG introduced a mandatory regulatory framework that included severe penalties for non-compliance. From inception, NetzDG triggered controversy and widespread concern about its implications for freedom of speech and fundamental rights, both within and outside Germany.13
Employing a “notice-and-takedown” approach necessitating extensive public and private co-operation, NetzDG obliges digital media platforms to delete or block illegal online content within prescribed time periods ranging from 24 hours to seven days.14 NetzDG defines “illegal content” by referencing numerous infractions in Germany’s Criminal Code, including such reputational and public order offences as insult and disturbances to the public peace.15 Digital platforms are obliged to inform complainants of their decisions and reasoning, and must indicate any rights of appeal.16 Platforms are further obliged to report their content moderation activities on their websites and in the German Federal Gazette (Bundesanzeiger).17 Notably, platforms are obliged to report potentially criminal content – including relevant IP addresses – to Germany’s Federal Criminal Police Office (Bundeskriminalamt).18 Online users are notified no earlier than four weeks after this transmission. Penalties for non-compliance under NetzDG are harsh. Systematic non-compliance attracts fines of up to €50 million for corporate entities, and up to €5 million for corporate officials.
Germany’s approach to regulating online communications has proven immensely popular, with over 25 countries and the EU having adopted or proposed legislation that directly or indirectly follows NetzDG’s example.19
2. EU’s Digital Services Act: “notice-and-action” model
There is perhaps no greater evidence of NetzDG’s influence than the recent enactment of the EU’s DSA.20 Designed as a cornerstone for shaping Europe’s digital future, DSA aims to create a safe, predictable, and trustworthy online user environment.21 In particular, DSA aims to “harmonize” online governance by countering harmful online content – particularly hate speech, disinformation, and other objectionable content – in a manner consistent with fundamental rights.
Directly applicable to all 27 EU Member States, DSA imposes on private digital intermediaries offering services in the EU the primary responsibility for handling illegal online content.22 Similar to NetzDG’s “notice-and-takedown” model, DSA introduces a “notice-and-action” mechanism that requires digital platforms to provide an accessible and user-friendly procedure by which users can complain about illegal online content. The pivotal aspect is the concept of “illegal content”, which is defined in Art. 3(h) DSA as: “[…] any information that […] is not in compliance with Union law or the law of any Member State which is in compliance with Union law, irrespective of the precise subject matter or nature of that law”. This definition is thus significantly broader than the German counterpart in § 1(3) NetzDG, which covers only violations of designated criminal provisions. Penalties for non-compliance can be significant, and are indexed to platforms’ size and their degree of impact on the public sphere.23
Complaints of “illegal content” can come from two sources – individuals or entities. Regardless of source, DSA requires platforms to respond in a timely, diligent, non-arbitrary and objective manner, notifying complainants of their decision and any possible legal remedies.24 Notices submitted by “trusted flaggers” are given priority and processed on an expedited basis.25 “Trusted flagger” status is granted under DSA to public or private entities (i.e. not individuals) with sufficient expertise and competence in handling illegal content (e.g. Europol, INHOPE Association). Finally, a key component of the “notice-and-action” model is Art. 9 DSA, which requires platforms to comply with EU Member State orders to act against specific items of illegal online content.
DSA differs from NetzDG in several material respects. First, DSA prescribes no specific period for content removal, requiring instead a “timely” decision, thereby allowing platforms additional flexibility to review challenged content. In fact, digital platforms are exempted from liability if they act “diligently” to delete or block access to illegal content or activities. Consistent with DSA’s aim of respecting fundamental rights, platforms must also explain to users any restrictions imposed and their legal or contractual basis.26 Users can appeal platforms’ content moderation decisions through internal complaint-handling mechanisms, out-of-court dispute settlement, or judicial redress.27 Second, unlike NetzDG’s strict requirements, DSA requires platforms to notify authorities only when they are aware of information giving rise to a suspicion that a criminal offence involving a threat to life or personal safety has occurred or is likely to occur.28 Third, DSA does not oblige digital platforms to vigilantly monitor their website traffic for illegal content.29
In the end, by enacting DSA, the EU aims not only to guarantee a trustworthy online environment that effectively counters illegal online content, but also to offer a regulatory “complete code”. Consistent with this “harmonization” aim, DSA will supersede national regulations relating to matters falling within its scope.30 In due course, Germany’s NetzDG will accordingly give way to DSA’s revised regulatory framework.
III. Online Governance in North America
Compared with the predominant “notice-and-takedown” model, the United States and Canada have adopted different approaches that highlight many of the emerging challenges of global Internet regulation.
1. United States of America’s “market self-regulation” model
A further online regulatory model is “market self-regulation”, which is canonically associated with the United States of America. This model represents a fundamentally different approach to regulating online content, and symbolizes deep disagreement on the constitutional role of freedom of expression in democratic nations.31
This model has two main elements. The first is that digital platforms are protected from civil liability for offensive speech acts under section 230 of the Communications Decency Act (CDA).32 Congress initially passed section 230 to protect online platforms from liability under state law for speech torts, the operative language being: “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”.33 American courts have since held that section 230 not only protects digital intermediaries against defamation liability, but more broadly against claims based on third-party content such as “[…] negligence; deceptive trade practices, unfair competition, and false advertising; the common-law privacy torts; tortious interference with contract or business relations; intentional infliction of emotional distress; and dozens of other legal doctrines”.34 According to leading Internet attorneys, this broad safe harbor represents “the cornerstone of a functioning Internet”.35
The second element of the “market self-regulation” model is an enlarged scope of protection for offensive online speech – including hate speech – under the First Amendment to the United States Constitution.36 Compared with the EU’s “regulated self-regulation”,37 the primary means by which free speech is encroached upon in this model is the modification of digital platforms’ content moderation policies, or Terms of Use. Importantly, while our digital media environment has freed speakers from dependence on older gatekeepers epitomized by the editorial processes of print journalism, the shift from a “broadcasting” to a “participatory” communication model has introduced a new, highly interactive communication entity – the digital platform.38 Whether in the EU or North America, our increasing reliance on these new gatekeepers has proven to be highly problematic.39 Besides offering states and private actors new opportunities for control and surveillance,40 these digital platforms engender unprecedented and unforeseen tensions between their business models and their duties to respect fundamental and human rights, a phenomenon that has recently crystallized in America.41
Constitutional challenges and critiques
Many of these issues are now being litigated before the US Supreme Court in Moody v NetChoice, LLC.42 Recently, over 100 bills have been proposed in state legislatures purporting to regulate social media platforms’ content moderation policies.43 On 21 September 2022, the Attorney General for the State of Florida petitioned the US Supreme Court for a writ of certiorari to review a judgment of the Eleventh Circuit Court of Appeals, which declared significant portions of Florida’s new common carrier free speech statute unconstitutional.44 In Senate Bill 7072,45 Florida sought to regulate the “unlawful acts and practices” of social media platforms in censoring political and dissenting content by requiring them to divulge the how and why of their censorship decisions, and to host speech that they otherwise would not. Specifically, as to disclosure, the Florida Act requires platforms to “[…] publish the standards, including detailed definitions, it uses or has used for determining how to censor, deplatform, and shadow ban”.46 As to mandatory hosting rules, the Act leaves social media platforms free to adopt otherwise lawful content- and viewpoint-discriminatory standards, but requires them to apply whatever “[…] censorship, deplatforming, and shadow banning standards in a consistent manner among its users […]”.47 Underscoring the great importance of this case, many other states remain “waiting in the wings”, as evidenced by the multi-jurisdictional Amicus brief filed in support of Petitioner, the State of Florida.48
Although both parties joined issue on granting leave to appeal, the main disputed questions raised for consideration in Moody include:
Whether hosting on a digital platform constitutes “speech” or “editorial discretion”;
Whether a censorship right can be extracted from the First Amendment;
Whether digital platforms can or should be regulated as “common carriers”;49
Whether Congress authorized platforms to engage in content- and viewpoint-based discrimination under section 230 CDA;
Whether the Dormant Commerce Clause and section 230 CDA are preemptive.
Perhaps most interestingly, Columbia Law Professor Philip Hamburger filed an Amicus brief urging the US Supreme Court to proceed cautiously in the light of two deficiencies in the appeal record. First, Hamburger keenly observed that by applying for a preliminary injunction against enforcement of the Florida Act before suffering actual harm, the platforms framed their lawsuit “[…] in a posture that leaves the speech rights of ordinary Americans unrepresented”.50 Second, and related, the case arose on an appeal record devoid of discovery evidence “[…] on the depth of government involvement in the censorship” attributed to digital platforms alone.51 According to Hamburger, this missing evidence is “crucial” because “[i]t confirms […] that the case is centrally about the free speech of individuals, whose rights are not represented”,52 and it demonstrates “[…] the compelling need for common carrier laws, such as the Florida and Texas free speech statutes, to prevent government from privatizing its censorship”.53 The absence of a full evidentiary record of privatized government censorship is made all the more worrisome given Hamburger’s conviction that “[t]he jurisprudence of this Court has yet to catch up with the realities of how government uses private organizations to violate constitutional rights with impunity”.54
In the end, if the US Supreme Court takes up these challenges in Moody, the law of Internet regulation is likely to be changed materially, not only for the United States, but worldwide.
2. Canada’s “hybrid” regulatory model
Compared with the EU and the United States of America, Canada has embraced a more consultative, “multi-stakeholder” approach to online harms. Legislation implementing the advice of an expert panel of specialists in platform governance, content regulation, civil liberties, tech regulation, and national security is still pending; Canada’s government has avoided setting a fixed timeframe for its new regulatory framework, vowing instead to take whatever time is necessary to meet the challenge of “[…] getting the legislation right”.55
Bill C-36, “technical discussion paper”, and expert consultations
Canada’s most recent hate speech legislation was introduced in 2021. Called Bill C-36,56 it aimed to amend the Canadian Human Rights Act to make it a discriminatory practice “[…] to communicate or cause to be communicated hate speech by means of the Internet or other means of telecommunications […]”.57 Besides exempting private online communications,58 the proposed amendments – like the American “market self-regulation” model – included extensive safe harbors for digital platforms.59 Subsection 13(4), for example, excluded certain “telecommunications service providers” from its definition of “communication of hate speech”,60 and subsection 13(7) excluded “online communication service providers” from the Bill’s application altogether.61 Combined with an equivocal definition of “hate speech”,62 the Bill left potential victims of online harms with a limited and ineffective range of quasi-judicial remedies, including cease and desist orders and more conventional awards of compensatory and punitive damages.63 Despite its aim of providing an “important part” of Canada’s online regulatory framework, Bill C-36 was interrupted by the 2021 federal election, and has since stalled at first reading in Canada’s House of Commons.64
Along with Bill C-36, the Canadian government presented a “technical discussion paper” as part of its proposed regulatory framework,65 which offers further clues as to the country’s regulatory goals. Borrowing a page from Germany’s NetzDG, it endorsed a mandatory 24-hour takedown requirement for harmful content, backstopped by a federal “last resort” power to block non-compliant digital platforms. Additional aspects included:66
Compelling platforms to provide data on algorithmic filtering and blocking, including rationales for acting on flagged posts;
Obliging websites to employ better means for identifying and alerting authorities of illegal content, including preserving user data for future legal action;
Creating a new system for appealing platforms’ content moderation decisions;
Employing severe sanctions for non-compliance, including fining companies up to five percent of their global revenue, or $25 million, whichever is higher.
Finally, a new “Digital Safety Commission of Canada” was proposed, which would preside over this regulatory environment with powers – similar to those under the EU’s DSA – to issue binding “takedown” orders to online platforms.
Responding to concerns that this proposal did not properly respect freedom of expression,67 politicians announced plans to go back to the proverbial drawing board. Mindful of the ever-increasing complexities of online regulation, government officials proceeded on the basis that future regulations would not be a “panacea” for rectifying offensive content, but would be only “one piece of a bigger puzzle”.68
The government convened an expert panel in 2022; among the panel’s chief proposals for Canada’s revised framework were that the legislation should:69
emphasize risk management and human rights protections, and be flexible and adaptable to avoid becoming quickly obsolete;
incorporate strong commitments to digital literacy and public education;
establish clear consequences for non-compliance;
consider systemic biases and harm associated with bots, algorithms, and AI;
incorporate a suitable process for appealing content moderation decisions.
Ultimately, reflecting the growing regulatory heterogeneity described above, Canada’s expert panel disagreed on several vital issues, such as the definition of “harmful content”, mandatory content removal, the suitability of a 24-hour takedown requirement, the need for and feasibility of an independent review body, proactive or general platform monitoring, mandatory reporting to law enforcement authorities, platform immunity for speech torts, tailoring regulatory obligations to platform size or risk, and how to deal with fake news and disinformation.70
IV. Mounting Regulatory Tensions
From a comparative perspective, European and North American responses to harmful online content provide valuable insights into the nature and scope of regulatory challenges worldwide. First, the unique structural features of our digital media environment raise significant and unanticipated epistemological concerns for online regulators, requiring a new paradigm for bringing together a multitude of variables into an enhanced understanding of our online world. Second, differences in human rights protection and constitutional structure present difficult challenges for online regulators, requiring keen sensitivity to political and constitutional contexts for optimizing regulatory calibration. Third, the unprecedented power of digital platforms incentivizes privatized government censorship which, along with existing economic incentives driving platform censorship, increasingly imperils the quality and quantity of public discourse.
1. Digital media ecology and medico-legal integration
a) Restructured media ecology
The advent of the Internet and social media has triggered a seismic shift in our contemporary media ecology,71 transferring human discourse production onto a new medium and drastically altering its structure and dynamics. This transfer of ever greater portions of our lives online has given rise to many unanticipated epistemological concerns.72 From the emergence of augmented and virtual reality, and the looming prospect of an all-encompassing Metaverse,73 to the dangers of “link rot” (i.e. hyperlinks ceasing to work) and the weakening of humanity’s knowledge base,74 we are seeing a rapid intensionalization of our infosphere.75 In less than a generation, humanity has effectively rewritten nature’s code.
This “digital town square” raises many regulatory challenges. The US Supreme Court has sensibly accepted that when deciding free speech cases, it does “[…] not mechanically apply [a] rule used in the pre-digital era” to the technology of today.76 In Biden v Knight First Amendment Institute at Columbia University,77 a recent case involving President Donald Trump’s Twitter conduct, Justice Clarence Thomas wrote a thoughtful concurring opinion that may well influence future thinking on regulating digital platforms. Besides endorsing anti-discriminatory common carrier laws, he stressed that the principal difficulty of platform regulation is that “[…] applying old doctrines to new digital platforms is rarely straightforward”.78 As evidenced by the pending litigation in Moody, Justice Thomas rightly predicted that the Court “[…] will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms”.79
Importantly, the full extent of the risks posed by our modern free speech infrastructure is gradually being revealed. Besides acknowledging that our jurisprudence has yet to catch up with our digital media ecology more broadly, courts and legislatures are only now beginning to heed the admonitions of legal scholars who have long warned of the increasing privatization of government censorship. Over a decade ago, Professor Jack Balkin cautioned that so-called “new-school” regulatory techniques associated with modern digital media – which include controlling digital networks and auxiliary services like search engines, payment systems, and advertisers – present heightened risks of government co-opting, and censoring through, the private owners of our global media infrastructure.80 Amid rising awareness that “[p]latform control means content control”,81 Balkin further cautioned that our contemporary media environment effectively functions as an “[…] ingenious system of private prior restraint [that] achieves all of the cost- and burden-shifting effects of traditional prior restraint without the need for an official government licensing system or a judicial injunction”.82 Given mounting evidence that public discourse is now subject to “[…] the most extensive system of censorship in […] history”,83 there is an urgent need for new ideas and paradigms to assist in formulating effective “[…] structural obstacles to the privatization of censorship”.84
b) Insights from theoretical biology and medicine
Reconciling dislocations between old legal doctrine and new media requires restructuring and reordering the relations between affected stakeholders in our new digital environment. As anticipated by legal realists, medical science may provide valuable insights for formulating a more integrative model of Internet regulation.
Consistent with earlier trends towards intellectual synthesis embraced by Justices Holmes and Cardozo, in 1993 molecular biologist Professor Richard Strohman thoughtfully explored the possibility of a growing crisis in medical science and theoretical biology.85 While acknowledging that cellular mechanisms were amply understood, Strohman argued that medicine’s dominant model of genetic determinism – that complex human diseases and behaviors are reducible to purely genetic influences – was increasingly unable to contend with newer findings of biological complexity, necessitating a new and more comprehensive theory of living systems. The urgency of developing a new medical paradigm had been noted earlier by Dr George Engel.86 In Engel’s view, medicine was in crisis because of its adherence to a disease model that was no longer adequate for the profession’s scientific tasks and social responsibilities. Notably, while medical education had grown increasingly proficient in conveying to physicians sophisticated scientific knowledge about the body and its abnormalities, it had failed to give corresponding attention to the psychological and social aspects of illness and treatment.
At their respective levels of abstraction, Engel and Strohman questioned emerging trends towards biological reductionism and elementalism that have since come of age in our modern era. In their place, they argued for a new “biopsychosocial” paradigm, a transactional, holistic, analogical, and probabilistic approach to health and disease reflecting mounting evidence that “[…] the pathogenesis of disease involves a series of negative and positive feedbacks with multiple simultaneous and sequential changes potentially affecting any system of the body”.87 Among its implications, this model required physicians to explore complex relationships between social stress and bodily experience, to study how the corporealization of cultural experience occurs, and to determine our adaptive limits to environmentally-determined stressors.
Perhaps most importantly, this new medical model implicated physicians in wider political debates from which the prevailing conceptualization of disease might otherwise have insulated them. An analogous insulation occurs when the tensions and challenges of global Internet governance are contained within the rubric of more conventional methods and approaches to digital media regulation.
2. Fundamental rights protection and constitutional structure
As in Engel’s biopsychosocial paradigm, a renewed commitment to intellectual synthesis in our Internet governance era requires that we include a broader array of factors impacting digital media regulation. As seen above, two additional comparative law factors are differences in fundamental rights protection, and variances in constitutional structure.
One of the most troubling aspects of global Internet regulation is the considerable variation in free speech protection. Although DSA purports to be a “complete code” for all 27 EU Member States, not only does hate speech remain undefined, but there exists an increasing overlap with established public libel principles protecting speech that “[…] offend[s], shock[s], or disturb[s] the State or any sector of the population”.88 Perhaps most worryingly, the US Constitution protects an enlarged scope of “offensive” speech under the First Amendment, including hate speech.89 Much of what DSA intends to regulate as “illegal content” is constitutionally protected in America, a problem exacerbated by the “all-or-nothing” nature of platform posting. Moreover, regardless of jurisdiction, digital intermediaries continue to exclude categories of problematic speech by modifying their subscribers’ Terms of Use in ways that potentially violate free speech norms. Globally, we are confronting profound regulatory dilemmas about striking an appropriate balance between individuals’ interests in free speech and maintaining a robust and functional public sphere.
The second unsettling aspect of global Internet regulation concerns discrepancies in constitutional structure. Even if we could reconcile differences in global free speech protection, successful regulatory calibration requires responding to varying political and constitutional designs, a process heavily dependent upon comparative methods. Recent comparative law scholarship establishes that differences between presidential and parliamentary government, federal and unitary structures, mechanisms of legislative scrutiny, electoral systems, and the nature and extent of judicial review all have well-documented influences on regulatory dynamics in modern democracies.90 The emergent field of public accountability scholarship has further shown that established democracies have institutionalized a broad array of accountability mechanisms, which interrelate and have important aggregate effects, especially on holding power to account.91 These insights are particularly relevant given the underreported effects of our digital media ecology on the promotion and privatization of government censorship.
In the end, given the vast number of moving parts in online regulation, any “one-size-fits-all” approach or premature attempts at “harmonization” would appear to be structurally unsound.
3. Economic and political bases of digital censorship
Perhaps the most important aspect of global online regulation concerns the economic motives of digital platforms themselves. Consider the operation of today’s digital marketplace. As a rule, our networked economy’s basic structure incentivizes digital intermediaries to make their platforms a welcome place and experience. Naturally, “[t]he goal is to attract and retain as many [online] users as possible”.92 Economic success, then, is a function of acceptance and community norms – what sells will be what the community deems desirable. As explained by Peters and Johnson, “[…] if community norms dictate that certain speech does not sell (i.e., its presence deters individuals from using a platform), that speech is not likely to survive […]”.93 If left to the market, platforms will not long tolerate speech that damages their commercial interests. Importantly, speech that might provoke disagreement or start an argument – speech that might “offend”, “shock”, or “disturb”, for instance – is unlikely to be “liked”, “shared”, or otherwise promoted by users and intermediaries. The main regulatory challenge conventionally linked with this business model – whose success is measured by the click-through advertising rates of online users94 – is that it often conflicts with human rights norms, particularly freedom of expression. This proclivity of digital platforms to use their Terms of Use to censor speech that is otherwise protected – even under the First Amendment to the US Constitution – speaks to the power of the economic motives driving the increasing phenomena of overfiltering and overblocking.
Besides encouraging the filtering and blocking of “problematic” content, these technological and economic forces ultimately manifest in deeper structural threats to democracy. As cautioned by Professor Balkin in 2012, digital platforms that rely on advertising and online payment systems are increasingly induced to install filters and to continually police and remove “problematic” content. Besides exposing online users to an endless algorithmic selection of “bias-affirming materials that by turns soothe and provoke” further online engagement,95 these forces raise a possibility we are only now confronting: that the effective aim and result of our digital free speech infrastructure was “[…] to induce companies to engage in collateral censorship […]”.96 As stressed by Hamburger in the Moody litigation presently before the US Supreme Court (see III.1. above), because governments around the world have taken strong positions, particularly on issues of science and medicine, whether the censored material consists of “[…] academic papers, reports of medical cases, passionate disagreements, moderate colloquies, videos, and cartoons”,97 “[…] the censorship of dissenting views on these matters is the suppression of political opposition”.98 As a result, serious threats to public discourse remain largely concealed, and thus more difficult to diagnose and regulate.
V. Conclusion
In many ways, regulatory responses to ever-rising threats of offensive online content reflect well-intentioned, but hasty attempts to saddle the law with the burden of tasks that have had increasingly little to do with its existing methods, instruments, and theories. As argued in this article, limitations associated with our online regulatory frameworks can be better understood – perhaps mitigated, or even avoided altogether – by integrating pertinent insights from the natural and medical sciences. Foremost among these insights has been adopting a new scientific paradigm to bring together a multitude of variables into an enhanced understanding of our online world. Inspired by Engel’s biopsychosocial model, attempts to explain a complex phenomenon, such as harmful online content and its legal regulation, necessitate comprehensive investigations of socio-political levels of abstraction for clues as to its dysfunctions. In retrospect, earlier application of these insights might have invited difficult questions about the nature of digital intermediaries and their economic interests, including their relationship to the unique technological structure of our digital public sphere.
Such a systems-inspired approach may have even avoided the largely unexplored regulatory dichotomy that persists to this day. Whether employing a “notice-and-takedown” or “market self-regulation” model, we have yet to face squarely the possibility that the more we focus on regulating “offensive speech”, the deeper we entrench the technical infrastructure supporting privatization of government censorship. Among the many takeaways from Professor Hamburger’s admonitions is that by neglecting systematic censorship worldwide, we may be fighting only symptoms of online disease, not its structural causes. Incorporating mounting evidence of the economic incentives driving digital platforms thus has vital diagnostic and prescriptive value. While lending credibility to allegations of privatized government censorship, it also strengthens the case for adopting common carrier legal principles, or other structurally effective barriers to privatized censorship. By restricting our frame of reference to speech rights and offensive content – regardless of regulatory model – we may be “looking through the wrong end of the telescope”, and missing an important opportunity to perhaps cure what really ails us – before it is too late.
In the end, as evidenced by the growing epistemic, technical, economic, and politico-legal challenges of digital media regulation, our best prospect for their reconciliation will be exercising our increasingly untapped capacity for intellectual synthesis – a capacity our forebears seem to have understood more acutely. For whatever else it may do, such synthesis must inevitably result in healthy criticism, wider views, new fields of research, and greater activity on the part of those interested in questioning why, in our modern age of unprecedented wealth and technological advancement, more civil and open public discourse does not prevail.
R. A. Posner, The Essential Holmes: Selections from the Letters, Speeches, Judicial Opinions, and Other Writings of Oliver Wendell Holmes Jr, 1992, p. 184.↩︎
B. Cardozo, “Anniversary Discourse: What Medicine Can Do for Law”, (1929) 5 Bulletin of the New York Academy of Medicine, 581.↩︎
R. Pound, “The Lawyer as a Social Engineer”, (1954) 3 Journal of Public Law, 292. Like Cardozo, Pound encouraged lawyers to heed insights from other disciplines to become effective “social engineers” committed to real-world problem solving. See also J. Pope, “The Unfolding Unity” (1954) 3 Journal of Public Law, 319.↩︎
B. Cardozo, op. cit. (n. 2), p. 583.↩︎
B. Cardozo, op. cit. (n. 2), p. 583.↩︎
B. Cardozo, op. cit. (n. 2), p. 584. On medico-legal integration, see also J. M. Gibson and R. L. Schwartz, “Physicians and Lawyers: Science, Art, and Conflict”, (1980) 6 American Journal of Law & Medicine, 173; H. W. Smith, “Integration of Law and Medicine”, (1963) 14 Syracuse Law Review, 550; H. W. Smith, “Scientific Proof and Relations of Law and Medicine”, (1943) 10 University of Chicago Law Review, 243. Predating Justice Cardozo’s proposal, the German physician Rudolf Virchow famously referred to physicians as “natural attorneys of the poor”, once stating that “medicine is a social science, and politics is nothing else but medicine on a large scale”. See R. Virchow, “Der Armenarzt”, (1848) 18 Die Medicinische Reform 125, 125.↩︎
H. W. Smith, op. cit. (n. 6), p. 555.↩︎
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L277/1.↩︎
For Canadian regulations, see Bill C-36, An Act to amend the Criminal Code and the Canadian Human Rights Act and to make related amendments to another Act (hate propaganda, hate crimes and hate speech), 2nd Sess, 43rd Parl, 2020–2021, ss 12–13 (first reading 23 June 2021). See also Department of Canadian Heritage, “The Government’s Commitment to Address Online Safety”, Government of Canada, <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content.html> accessed 10 March 2023.↩︎
I. Illich, Medical Nemesis: The Expropriation of Health, 1975, p. 165. In Medical Nemesis, “iatrogenesis” was coined to describe the causation of disease or harmful complications attributable to human or medical activity, including diagnosis, intervention, error, or negligence. Illich referred to three distinct but interrelated forms of iatrogenesis operative at progressively higher levels of abstraction: clinical, social, and structural.↩︎
Gesetz zur Verbesserung der Rechtsdurchsetzung in sozialen Netzwerken (Netzwerkdurchsetzungsgesetz – NetzDG) of 1 September 2017 (BGBl I p 3352).↩︎
Digital Services Act, op. cit. (n. 8).↩︎
For criticisms of NetzDG, see J. Mchangama, “The War on Free Speech: Censorship’s Global Rise”, (2022) 101 Foreign Affairs 117, 123–24; J. Rinceanu, “Menschenrechte in der digitalen Krise”, in M. Engelhart, H. Kudlich and B. Vogel (eds.), Digitalisierung, Globalisierung und Risikoprävention: Festschrift für Ulrich Sieber zum 70. Geburtstag, 2021, p. 831. See also H. Tworek and P. Leerssen, “An Analysis of Germany’s NetzDG Law”, (2019) First session of the Transatlantic High Level Working Group on Content Moderation Online and Freedom of Expression.↩︎
Under §§ 3(2)(2)–(3) NetzDG, “manifestly unlawful” content must be deleted or blocked within 24 hours, whereas all other “unlawful content” must be deleted or blocked within seven days of receipt of a complaint.↩︎
§ 1(3) NetzDG.↩︎
§ 3(2)(5) NetzDG.↩︎
§ 2(1) NetzDG.↩︎
§ 3a NetzDG.↩︎
Many nations that followed NetzDG’s example – including Ethiopia, Pakistan, Turkey, Russia, Belarus, Mali, Morocco, Nigeria, Cambodia, Indonesia, and Kyrgyzstan – are flawed democracies or authoritarian states that do not have Germany’s rule of law safeguards and free speech protections. See J. Mchangama and N. Alkiviadou, “The Digital Berlin Wall: How Germany (Accidentally) Created a Prototype for Global Online Censorship – Act Two”, 2020, p. 21.↩︎
According to Art. 93(2), DSA shall apply from 17 February 2024.↩︎
Art. 1 DSA.↩︎
DSA distinguishes three categories of digital intermediary services, namely, conduit, caching, and hosting. See Art. 3(g) DSA.↩︎
Art. 74 DSA and Recital 117. Fines not exceeding 6% of total corporate revenue can be imposed on “very large” platforms and search engines under DSA.↩︎
Provisions on the “notice-and-action” mechanism are anchored in Art. 16 et seq. DSA.↩︎
Art. 22 DSA.↩︎
Art. 17 DSA.↩︎
Art. 17(3)(f) DSA.↩︎
Art. 18 DSA.↩︎
Art. 8 DSA.↩︎
Recital 9 DSA.↩︎
See generally G. Frosio (ed.), The Oxford Handbook of Online Intermediary Liability, 2020.↩︎
Communications Decency Act, 47 USC § 230 (1996). See generally E. Goldman, “An Overview of The United States”, in G. Frosio, op. cit. (n. 31), p. 155.↩︎
CDA, op. cit. (n. 32), § 230(c)(1) (emphasis added).↩︎
E. Goldman, “Why Section 230 is Better than the First Amendment”, (2019) 95 Notre Dame Law Review Reflection, 33, 37.↩︎
See M. Ammori, “The ‘New’ New York Times: Free Speech Lawyering in the Age of Google and Twitter” (2014) 127 Harvard Law Review, 2259, 2287.↩︎
US Const Amend I. The First Amendment reads: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press, or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances”. See also J. Kamatali, “Limits of the First Amendment: Protecting American Citizens’ Free Speech in the Era of the Internet and the Global Marketplace of Ideas”, (2015) 33 Wisconsin International Law Journal, 587. The US Supreme Court has exempted numerous categories of speech from First Amendment protection, including obscenity, fighting words, defamation, child pornography, fraud, and incitement to imminent lawless action. See V. L. Killion, Congressional Research Service, IF11072, The First Amendment: Categories of Speech (2019).↩︎
See e.g. H. J. Kleinsteuber, “The Internet between Regulation and Governance”, in: C. Möller and A. Amouroux (eds.), The Media Freedom Internet Cookbook, 2004, p. 61, 63.↩︎
See generally R. K. Logan, Understanding New Media: Extending Marshall McLuhan, 2d ed., 2016, pp. 27-74.↩︎
See e.g., J. Peters and B. Johnson, “Conceptualizing Private Governance in a Networked Society”, (2016) 18 North Carolina Journal of Law and Technology, 15.↩︎
See e.g. J. M. Balkin, “Old-School/New-School Speech Regulation”, (2014) 127 Harvard Law Review, 2296; J. M. Balkin, “The First Amendment is an Information Policy”, (2012) 41 Hofstra Law Review, 1. See also L. DeNardis, The Global War for Internet Governance, 2014, p. 17, who also emphasises the fundamental importance of the Internet’s free speech infrastructure.↩︎
See e.g. R. L. Weaver, From Gutenberg to the Internet: Free Speech, Advancing Technology, and the Implications for Democracy, 2nd ed., 2019.↩︎
See US Supreme Court, 21 September 2022, Moody v NetChoice, LLC, No. 22-277; NetChoice, LLC v Moody 34 F4th 1196 (11th Cir 2022), aff’g No 4:21-cv-00220 (ND Fla 2021).↩︎
D. Harwell, “Jan. 6 Twitter witness: Failure to curb Trump spurred ‘terrifying’ choice” The Washington Post <https://www.washingtonpost.com/technology/2022/09/22/jan6-committee-twitter-witness-navaroli/> accessed 11 March 2023.↩︎
Brief for Petitioner, US Supreme Court, 21 September 2022, Moody v NetChoice, LLC, No. 22-277.↩︎
Florida Statutes § 501.2041 (Florida Act).↩︎
Op. cit. (n. 45), § 501.2041(2)(a).↩︎
Op. cit. (n. 45), § 501.2041(2)(b) (emphasis added).↩︎
Brief of Amici Curiae States of Ohio, Alabama, Alaska, Arizona, Arkansas, Idaho, Iowa, Kentucky, Mississippi, Missouri, Montana, Nebraska, South Carolina, Tennessee, Texas, and Utah in Support of Petitioners, US Supreme Court, 21 September 2022, Moody v NetChoice, LLC, No. 22-277. Petitioners reported that, at last count, “lawmakers in 34 states” are considering laws regulating social media platforms to prevent unfair censorship. See Brief for Petitioner, op. cit. (n. 44), p. 13.↩︎
For thorough analyses of common carrier laws and their ability to counter social media platforms leveraging economic might into enhanced political power and censorship, see G. M. Dickinson, “Big Tech’s Tightening Grip on Internet Speech”, (2022) 55 Indiana Law Review, 101; E. Volokh, “Treating Social Media Platforms like Common Carriers?”, (2021) 1 Journal of Free Speech Law, 377; G. Lakier, “The Non-First Amendment Law of Freedom of Speech”, (2021) 134 Harvard Law Review, 2299.↩︎
Brief of Professor P. Hamburger as Amicus Curiae in Support of Neither Party, US Supreme Court, 21 September 2022, Moody v NetChoice, LLC, No. 22-277, p. 3 (emphasis added).↩︎
Brief of Professor P. Hamburger, op. cit. (n. 50).↩︎
Brief of Professor P. Hamburger, op. cit. (n. 50), p. 9.↩︎
Brief of Professor P. Hamburger, op. cit. (n. 50).↩︎
Brief of Professor P. Hamburger, op. cit. (n. 50), pp. 16–17. See also K. Langvardt, “Regulating Online Content Moderation”, (2018) 106 Georgetown Law Journal, 1353, 1355.↩︎
See generally R. Aiello, “Where does the Liberal promise to address harmful online content stand?”, CTVNews.ca, <https://www.ctvnews.ca/politics/where-does-the-liberal-promise-to-address-harmful-online-content-stand-1.6048720> accessed 11 March 2023.↩︎
See Bill C-36, op. cit. (n. 9), ss 12–23.↩︎
Bill C-36, op. cit. (n. 9), ss 13(1).↩︎
Bill C-36, op. cit. (n. 9), ss 13(5).↩︎
Bill C-36, op. cit. (n. 9), ss 13(4), 13(7).↩︎
Bill C-36, op. cit. (n. 9), ss 13(4). This subsection exempted “telecommunications services providers” as defined in subsection 2(1) of Canada’s Telecommunications Act.↩︎
Bill C-36, op. cit. (n. 9), ss 13(7).↩︎
Bill C-36, op. cit. (n. 9), ss 13(9), 13(10).↩︎
Bill C-36, op. cit. (n. 9), s 19.↩︎
Bill C-36, op. cit. (n. 9).↩︎
Department of Canadian Heritage, “The Government’s Commitment to Address Online Safety: Technical Paper”, Government of Canada, <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/technical-paper.html> accessed 11 March 2023.↩︎
Department of Canadian Heritage, op. cit. (n. 65).↩︎
M. Geist, “Tracking the Submissions: What the Government Heard in its Online Harms Consultation (Since It Refuses to Post Them)”, MichaelGeist.ca, <https://www.michaelgeist.ca/2021/10/tracking-the-submissions-what-the-government-heard-in-its-online-harms-consultation-since-it-refuses-to-post-them/> accessed 11 March 2023.↩︎
R. Aiello, op. cit. (n. 55).↩︎
Department of Canadian Heritage, “Expert Advisory Group: Concluding Workshop Summary”, Government of Canada, <https://www.canada.ca/en/canadian-heritage/campaigns/harmful-online-content/concluding-summary.html> accessed 11 March 2023.↩︎
Department of Canadian Heritage, op. cit. (n. 69).↩︎
See L. Floridi, The Fourth Revolution: How the Infosphere is Reshaping Human Reality, 2014. The ubiquity and vital role of social media has prompted the US Supreme Court to declare it “the modern public square”. See US Supreme Court, Packingham v North Carolina 137 S Ct 1730, 1732 (2017).↩︎
See L. Floridi, “Soft Ethics and the Governance of the Digital”, (2018) 31 Philosophy and Technology, 1. Floridi contends that we no longer live online or offline, but onlife.↩︎
See M. Ball, “Framework for the Metaverse: The Metaverse Primer”, MatthewBall.vc, 29 June 2021, <https://www.matthewball.vc/all/forwardtothemetaverseprimer> accessed 11 March 2023.↩︎
J. Zittrain, “The Internet is Rotting: Too much has been lost already. The glue that holds humanity’s knowledge together is coming undone”, The Atlantic, <https://www.theatlantic.com/technology/archive/2021/06/the-internet-is-a-collective-hallucination/619320/> accessed 11 March 2023.↩︎
Even in the 1960s and 1970s, Marshall McLuhan foresaw the next era in communications technology leading to an epistemic “hyperreality” not unlike the Metaverse. See R. K. Logan, op. cit. (n. 38), p. 46.↩︎
US Supreme Court, Riley v California 573 US 373 (2014), 406–07 (Alito J, concurring).↩︎
US Supreme Court, Biden v. Knight First Amendment Institute, 141 S Ct 1220 (2021) (Thomas J, concurring).↩︎
US Supreme Court, op. cit. (n. 77), 1221.↩︎
US Supreme Court, op. cit. (n. 77).↩︎
See J. M. Balkin, op. cit. (n. 40).↩︎
A. Tutt, “The New Speech”, (2014) 41 Hastings Constitutional Law Quarterly, 235, 249.↩︎
J. M. Balkin, op. cit. (n. 40), p. 2326.↩︎
Brief of Professor P. Hamburger, op. cit. (n. 50), p. 25.↩︎
Brief of Professor P. Hamburger, op. cit. (n. 50), p. 16 (emphasis added).↩︎
R. C. Strohman, “Ancient Genomes, Wise Bodies, Unhealthy People: Limits of a Genetic Paradigm in Biology and Medicine”, (1993) 37 Perspectives in Biology and Medicine, 112, 112.↩︎
G. L. Engel, “A Unified Concept of Health and Disease”, (1960) 3 Perspectives in Biology and Medicine, 459; G. L. Engel, “The Need for a New Medical Model: A Challenge for Biomedicine”, (1977) 196 Science, 129.↩︎
G. L. Engel, “A Unified Concept of Health and Disease”, op. cit. (n. 86), p. 485.↩︎
ECtHR, 7 December 1976, Handyside v United Kingdom, 1 EHRR 737 [49] (emphasis added). See also ECtHR, 8 July 1986, Lingens v Austria, 8 EHRR 407; ECtHR, 23 April 1992, Castells v Spain, App. no. 11798/85. For recent commentary on this point, see J. Mchangama and N. Alkiviadou, “Hate Speech and the European Court of Human Rights: Whatever Happened to the Right to Offend, Shock or Disturb?”, (2021) 21 Human Rights Law Review, 1008.↩︎
See generally J. Kamatali, op. cit. (n. 36).↩︎
See e.g., R. Stephenson, A Crisis of Democratic Accountability: Public Libel Law and the Checking Function of the Press, 2018.↩︎
See e.g., M. Bovens and others (eds.), The Oxford Handbook of Public Accountability, 2014; K. Strøm and others (eds.), Delegation and Accountability in Parliamentary Democracies, 2008; R. Mulgan, Holding Power to Account: Accountability in Modern Democracies, 2003; A. Schedler and others (eds.), The Self-Restraining State: Power and Accountability in New Democracies, 1999.↩︎
J. Peters and B. Johnson, op. cit. (n. 39), p. 65.↩︎
J. Peters and B. Johnson, op. cit. (n. 39), p. 65.↩︎
M. Lavi, “Content Providers’ Secondary Liability: A Social Network Perspective”, (2016) 26 Fordham Intellectual Property Media & Entertainment Law Journal, 855, 935–36 fn. 316.↩︎
K. Langvardt, “A New Deal for the Online Public Sphere”, (2018) 26 George Mason Law Review, 341, 358.↩︎
See J. M. Balkin, op. cit. (n. 40), p. 2324 (emphasis added).↩︎
Brief of Professor P. Hamburger, op. cit. (n. 50), p. 6.↩︎
Brief of Professor P. Hamburger, op. cit. (n. 50), p. 6 (emphasis added).↩︎