The Internet as a Speech Machine and Other Myths Confounding Section 230 Reform
I. Introduction
A robust public debate is currently underway about the responsibility of online platforms for harmful content. We have long called for this discussion,1 but only recently has it been seriously taken up by legislators and the public. The debate begins with a basic question: should platforms be responsible for user-generated content?2 If so, under what circumstances? What exactly would such responsibility look like?
At the heart of this debate is Section 230 of the Communications Decency Act of 19963 —a provision originally designed to encourage tech companies to clean up “offensive” online content. Section 230 was adopted at the dawn of the commercial internet. According to the standard narrative of its passage, federal lawmakers wanted the internet to be open and free, but they also realized that such openness risked encouraging noxious activity.4 In their estimation, tech companies were essential partners in any effort to “clean up the Internet.”5
A troubling 1995 judicial decision, however, imperiled the promise of self-regulation. In Stratton Oakmont, Inc. v. Prodigy, a New York state court ruled that any attempt to moderate content turned platforms into publishers and thus increased their risk of liability.6 Lawmakers devised Section 230 as a direct repudiation of that ruling. The idea was to incentivize, rather than penalize, private efforts to filter, block, or otherwise address noxious activity.7 Section 230 provided that incentive, securing a shield from liability for “Good Samaritans” that under- or over-filtered “offensive” content.8
Over the past two-plus decades, Section 230 has helped secure a variety of opportunities for online engagement, but individuals and society have not been the clear winners. Regrettably, state and lower federal courts have extended Section 230’s legal shield far beyond what the law’s words, context, and purpose support.9 Platforms have been shielded from liability even when they encourage illegal action, deliberately keep up manifestly harmful content, or take a cut of users’ illegal activities.10
To many of its supporters, however, Section 230 is an article of faith. Section 230 has been hailed as “the most important law protecting internet speech” and characterized as the essential building block of online innovation.11 For years, to question Section 230’s value proposition was viewed as sheer folly and, for many, heretical.
No longer. Today, politicians across the ideological spectrum are raising concerns about the leeway provided to content platforms under Section 230.12 Conservatives claim that Section 230 gives tech companies a license to silence speech based on viewpoint.13 Liberals criticize Section 230 for giving platforms the freedom to profit from harmful speech and conduct.14
Although their assessments of the problem differ, lawmakers agree that Section 230 needs fixing. As a testament to the shift in attitudes, the House Energy and Commerce Committee held a hearing on October 16, 2019, on how to make the internet “healthier” for consumers, bringing together academics (including one of us, Citron), advocates, and social media companies to discuss whether and how to amend Section 230.15 The Department of Justice held an event devoted to Section 230 reform (at which one of us, Franks, participated) on February 19, 2020.16
In a few short years, Section 230 reform efforts have evolved from academic fantasy to legislative reality.17 One might think that we, as critics of the Section 230 status quo, would cheer this moment. But we approach this opportunity with caution. Congress cannot fix what it does not understand. Sensible policymaking depends on a clear-eyed view of the interests at stake. As advisers to federal lawmakers on both sides of the aisle, we can attest to the need to dispel misunderstandings in order to clear the ground for meaningful policy discussions.
The public discourse around Section 230 is riddled with misconceptions.18 As an initial matter, many people who opine about the law are unfamiliar with its history, text, and application. This lack of knowledge impairs thoughtful evaluation of the law’s goals and how well they have been achieved. Accordingly, Part II of this Article sets the stage with a description of Section 230—its legislative history and purpose, its interpretation in the courts, and the problems that current judicial interpretation raises. A second, and related, major source of misunderstanding is the conflation of Section 230 and the First Amendment. Part III of this Article details how this conflation distorts discussion in three ways: it assumes all internet activity is protected speech, it treats private actors as though they were government actors, and it presumes that regulation will inevitably result in less speech. These distortions must be addressed to pave the way for effective policy reform. That is the subject of Part IV, which offers potential solutions to help Section 230 achieve its legitimate goals.
II. Section 230: A Complex History
Tech policy reform is often a difficult endeavor. Sound tech policy reform depends upon a clear understanding of the technologies and the varied interests at stake. As recent hearings on Capitol Hill have shown, lawmakers often struggle to effectively address fast-moving technological developments.19 The slowness of the lawmaking process further complicates matters.20 Lawmakers may be tempted to throw up their hands in the face of technological change that is likely to outpace their efforts.
This Part highlights the developments that bring us to this moment of reform. Section 230 was devised to incentivize responsible content moderation practices.21 And yet its drafting fell short of that goal by failing to explicitly condition the legal shield on responsible practices. This has led to an overbroad reading of Section 230, with significant costs to individuals and society.
A. Reviewing the History Behind Section 230
In 1996, Congress faced a challenge. Lawmakers wanted the internet to be open and free, but they also knew that openness risked the posting of illegal and “offensive” material.22 They knew that federal agencies could not deal with all “noxious material” on their own and that they needed tech companies to help moderate content. Congress devised an incentive: a shield from liability for “Good Samaritans” that blocked or filtered too much or too little speech as part of their efforts to “clean up the Internet.”23
The Communications Decency Act (CDA), part of the Telecommunications Act of 1996, was introduced to make the internet safer for children and to address concerns about pornography.24 Besides proposing criminal penalties for the distribution of sexually explicit material online, members of Congress underscored the need for private sector help in reducing the volume of “offensive” material online.25 Then-Representatives Christopher Cox and Ron Wyden offered an amendment to the CDA entitled “Protection for Private Blocking and Screening of Offensive Material.”26 The Cox-Wyden Amendment, codified as Section 230, provided immunity from liability for “Good Samaritan” online service providers that over- or under-filtered objectionable content.27
Section 230(c), entitled “Good Samaritan blocking and filtering of offensive content,” has two key provisions. Section 230(c)(1) specifies that providers or users of interactive computer services will not be treated as publishers or speakers of user-generated content.28 Section 230(c)(2) says that online service providers will not be held liable for good-faith filtering or blocking of user-generated content.29 Section 230 also carves out limitations for its immunity provisions: its protections do not apply to violations of federal criminal law, intellectual property law, the Electronic Communications Privacy Act, and, as of 2018, the knowing facilitation of sex trafficking.30
In 1996, lawmakers could hardly have imagined the role that the internet would play in modern life. Yet Section 230’s authors were prescient in many ways. In their view, “if this amazing new thing—the Internet—[was] going to blossom,” companies should not be “punished for trying to keep things clean.”31 Cox recently explained that “the original purpose of [Section 230] was to help clean up the Internet, not to facilitate people doing bad things on the Internet.”32 The key to Section 230, Wyden agreed, was “making sure that companies in return for that protection—that they wouldn’t be sued indiscriminately—were being responsible in terms of policing their platforms.”33
B. Explaining the Judiciary’s Interpretation of Section 230
The judiciary’s interpretation of Section 230 has not squared with this vision. Rather than treating Section 230 as a legal shield for responsible moderation efforts, courts have stretched it far beyond what its words, context, and purpose support.34 Section 230 has been read to immunize from liability platforms that:
knew about users’ illegal activity, deliberately refused to remove it, and ensured that those users could not be identified;35
solicited users to engage in tortious and illegal activity;36 and
designed their sites to enhance the visibility of illegal activity while ensuring that the perpetrators could not be identified and caught.37
Courts have attributed this sweeping approach to the fact that “First Amendment values [drove] the CDA.”38 For support, courts have pointed to Section 230’s “Findings” and “Policy” sections, which highlight the importance of the “vibrant and competitive free market that presently exists” for the internet and the internet’s role in facilitating “myriad avenues for intellectual activity” and the “diversity of political discourse.”39 But as one of us (Franks) has underscored, Congress’s stated goals also included:
the development of technologies that “maximize user control over what information is received” by Internet users, as well as the “vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking and harassment by means of the computer.” In other words, the law [was] intended to promote the values of privacy, security and liberty alongside the values of open discourse.40
Section 230’s liability shield has been extended to activity that has little or nothing to do with free speech, such as the sale of dangerous products.41 Consider Armslist.com, a self-described “firearms marketplace.”42 Armslist helps match unlicensed gun sellers with buyers who cannot pass background checks, buyers like domestic abuser Radcliffe Haughton.43 Haughton’s estranged wife, Zina, had obtained a restraining order against him that banned him from legally purchasing a firearm,44 but Haughton used Armslist.com to easily find a gun seller that did not require a background check.45 On October 21, 2012, he used the gun he purchased on the site to murder Zina and two of her co-workers.46 The Wisconsin Supreme Court found Armslist to be immune from liability under Section 230(c)(1), despite profiting from the illegal firearm sale that led to multiple murders.47
Extending Section 230’s immunity shield to platforms like Armslist.com, which deliberately facilitate and earn money from unlawful activity, directly contradicts the stated goals of the CDA. Armslist.com can hardly be said to “provide ‘educational and informational resources’ or contribute to ‘the diversity of political discourse.’”48 Invoking Section 230 to immunize from liability enterprises that have nothing to do with moderating online speech, such as marketplaces that connect sellers of deadly weapons with prohibited buyers for a cut of the profits, is unjustifiable.
C. Evaluating the Status Quo
The overbroad interpretation of Section 230 means that platforms have scant legal incentive to combat online abuse. Rebecca Tushnet put it well a decade ago: Section 230 ensures that platforms enjoy “power without responsibility.”49
Market forces alone are unlikely to encourage responsible content moderation. Platforms make their money through online advertising generated by users liking, clicking, and sharing content.50 Allowing attention-grabbing abuse to remain online often accords with platforms’ rational self-interest.51 Platforms “produce nothing and sell nothing except advertisements and information about users, and conflict among those users may be good for business.”52 On Twitter, for example, ads can be directed at users interested in the words “white supremacist” and “anti-gay.”53 If a company’s analytics suggest that people pay more attention to content that makes them sad or angry, then the company will highlight such content.54 Research shows that people are more attracted to negative and novel information.55 Thus, keeping up destructive content may make the most sense for a company’s bottom line.
As Federal Trade Commissioner Rohit Chopra warned in his powerful dissent from the agency’s 2019 settlement with Facebook, the behavioral advertising business model is the “root cause of [social media companies’] widespread and systemic problems.”56 Online behavioral advertising generates profits by “turning users into products, their activity into assets,” and their platforms into “weapons of mass manipulation.”57 Tech companies “have few incentives to stop [online abuse], and in some cases are incentivized to ignore or aggravate [it].”58
To be sure, the dominant tech companies have moderated certain content by filtering or blocking it.59 What often motivates these efforts is pressure from the European Commission to remove hate speech and terrorist activity.60 The same companies have banned certain forms of online abuse, such as nonconsensual pornography61 and threats, in response to lobbying from users, advocacy groups, and advertisers.62 They have expended resources to stem abuse when it has threatened their bottom line.63
Yet the online advertising business model continues to incentivize revenue-generating content that causes significant harm to the most vulnerable among us. Online abuse generates traffic, clicks, and shares because it is salacious and negative.64 Deepfake pornography sites65 as well as revenge porn and gossip sites66 thrive thanks to advertising revenue.
Without question, Section 230 has been valuable to innovation and expression.67 It has enabled vast and sundry businesses. It has led to the rise of social media companies that many people find valuable, such as Facebook, Twitter, and Reddit.
At the same time, Section 230 has subsidized platforms whose business is online abuse and platforms that benefit from ignoring abuse. It is a classic “moral hazard,” ensuring that tech companies never have to absorb the costs of their behavior.68 It takes away the leverage that victims might have had to get harmful content taken down.
This laissez-faire approach has been costly to individuals, groups, and society. As more than ten years of research have shown, cyber mobs and individual harassers inflict serious and widespread injury.69 According to a 2017 Pew Research Center study, one in five U.S. adults have experienced online harassment that includes stalking, threats of violence, or cyber sexual harassment.70 Women — particularly women of color and bisexual women — and other sexual minorities are targeted most frequently.71
Victims of online abuse do not feel safe on or offline.72 They experience anxiety and severe emotional distress. They suffer damage to their reputations and intimate relationships as well as their employment and educational opportunities.73 Some victims are forced to relocate, change jobs, or even change their names.74 Because the abuse so often appears in internet searches of their names, victims have difficulty finding employment or keeping their jobs.75
Failing to address online abuse does not just inflict economic, physical, and psychological harms on victims — it also jeopardizes their right to free speech. Online abuse silences victims.76 Targeted individuals often shut down social media profiles and e-mail accounts and withdraw from public discourse.77 Those with political ambitions are deterred from running for office.78 Journalists refrain from reporting on controversial topics.79 Sextortion victims are coerced into silence with threats of violence, insulating perpetrators from accountability.80
An overly capacious view of Section 230 has undermined equal opportunity in employment, politics, journalism, education, cultural influence, and free speech.81 The benefits of Section 230 immunity surely could have been secured at a lesser price.82
III. Debunking the Myths about Section 230
After writing about overbroad interpretations of Section 230 for more than a decade, we have eagerly anticipated the moment when federal lawmakers would begin listening to concerns about Section 230. Finally, lawmakers are questioning the received wisdom that any tinkering with Section 230 would lead to a profoundly worse society. Yet we approach this moment with a healthy dose of skepticism. Nothing is gained if Section 230 is changed to indulge bad faith claims, address fictitious concerns, or disincentivize content moderation. We have been down this road before, and it is not pretty.83 Yes, Section 230 is in need of reform, but it must be the right kind of reform.
Our reservations stem from misconceptions riddling the debate. Those now advocating for repealing or amending Section 230 often dramatically claim that broad platform immunity betrays free speech guarantees by sanctioning the censorship of political views. By contrast, Section 230 absolutists oppose any effort to amend Section 230 on the grounds that broad platform immunity is indispensable to free speech guarantees. Both sides tend to conflate the First Amendment and Section 230, though for very different ends. This conflation reflects and reinforces three major misconceptions. One is the presumption that all internet activity is speech. The second is the treatment of private actors as if they were government actors. The third is the assumption that any regulation of online conduct will inevitably result in less speech. This Part identifies and debunks these prevailing myths.
A. The Internet as a Speech Machine
Both detractors and supporters agree that Section 230 provides online intermediaries broad immunity from liability for third-party content. The real point of contention between the two groups is whether this broad immunity is a good or a bad thing. While critics of Section 230 point to the extensive range of harmful activity that the law’s deregulatory stance effectively allows to flourish, Section 230 defenders argue that the law’s laissez-faire nature is vital to ensuring a robust online marketplace of ideas.
Section 230 enthusiast Elizabeth Nolan Brown argues that “Section 230 is the Internet’s First Amendment.”84 David Williams, president of the Taxpayers Protection Alliance, similarly contends that “[t]he internet flourishes when social media platforms allow for discourse and debate without fear of a tidal wave of liability. Ending Section 230 would shutter this marketplace of ideas at tremendous cost.”85 Professor Eric Goldman claims that Section 230 is “even better than the First Amendment.”86
This view of Section 230 presumes that the internet is primarily, if not exclusively, a medium of speech. The text of Section 230 reinforces this characterization through the use of the terms “publish,” “publishers,” “speech,” and “speakers” in 230(c), as well as the finding that the “Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”87
But the presumption that the internet is primarily a medium of speech should be interrogated.88 When Section 230 was passed, it may have made sense to think of the internet as a speech machine. In 1996, the internet was text-based and predominantly noncommercial.89 Only 20 million American adults had internet access, and these users spent less than half an hour a month online.
But by 2019, 293 million Americans were using the internet,90 and they were using it not only to communicate, but also to buy and sell merchandise, find dates, make restaurant reservations, watch television, read books, stream music, and look for jobs.91 As Nolan Brown describes it:
the entire suite of products we think of as the internet—search engines, social media, online publications with comments sections, Wikis, private message boards, matchmaking apps, job search sites, consumer review tools, digital marketplaces, Airbnb, cloud storage companies, podcast distributors, app stores, GIF clearinghouses, crowdsourced funding platforms, chat tools, email newsletters, online classifieds, video sharing venues, and the vast majority of what makes up our day-to-day digital experience—have benefited from the protections offered by Section 230.92
Many of these “products” have very little to do with speech and, indeed, many of their offline cognates would not be considered speech for First Amendment purposes.
This is not the same thing as saying that the First Amendment does not protect all speech, although this is also true. The point here is that much human activity does not implicate the First Amendment at all. As Frederick Schauer observes, “Like any other rule, the First Amendment does not regulate the full range of human behavior.”93
The acts, behaviors, and restrictions not encompassed by the First Amendment at all — the events that remain wholly untouched by the First Amendment — are the ones that are simply not covered by the First Amendment. It is not that the speech is not protected. Rather, the entire event — an event that often involves “speech” in the ordinary language sense of the word — does not present a First Amendment issue at all, and the government’s action is consequently measured against no First Amendment standard whatsoever. The First Amendment just does not show up.94
Section 230 absolutists are not wrong to emphasize the vast array of activities now conducted online; they are wrong to presume that the First Amendment shows up for all of them.
First Amendment doctrine draws a line, contested though it might be, not only between protected and unprotected speech but between speech and conduct. As one of us (Citron) has written, “[a]dvances in law and technology . . . complicate this distinction as they make more actions achievable through ‘mere’ words.”95 Because so much online activity involves elements that are not unambiguously speech-related, whether such activities are in fact speech should be a subject of express inquiry. The Court has made clear that conduct is not automatically protected simply because it involves language in some way: “it has never been deemed an abridgement of freedom of speech or press to make a course of conduct illegal merely because the conduct was in part initiated, evidenced, or carried out by means of language, either spoken, written, or printed.”96
And even when dealing with actions sufficiently expressive to be considered speech for First Amendment purposes,97 “[t]he government generally has a freer hand in restricting expressive conduct than it has in restricting the written or spoken word.”98 When considering conduct such as wearing black armbands,99 setting fire to the American flag,100 making financial contributions to political campaigns,101 or burning draft cards,102 the Court asks whether such acts are speech at all before turning to the question of how much, if at all, they are protected by the First Amendment.
But the conflation of Section 230 and the First Amendment short-circuits this inquiry. Intermediaries invoking Section 230’s protections implicitly characterize the acts or omissions at issue as speech, and courts frequently allow them to do so without challenge. When “courts routinely interpret Section 230 to immunize all claims based on third-party content”—including civil rights violations; “negligence; deceptive trade practices, unfair competition, and false advertising; the common law privacy torts; tortious interference with contract or business relations; intentional infliction of emotional distress; and dozens of other legal doctrines”103 —they go far beyond existing First Amendment doctrine, and grant online intermediaries an unearned advantage over offline intermediaries.104
In addition to short-circuiting the analysis of whether particular online activities qualify as speech at all, an overly indulgent view of Section 230 short-circuits the analysis of whether and how much certain speech should be protected. The Court has repeatedly observed that not all speech receives full protection under the First Amendment.105 Speech on “matters of public concern” is “‘at the heart of the First Amendment’s protection,’” whereas “speech on matters of purely private concern is of less First Amendment concern.”106 Some categories of speech, including obscenity, fighting words, and incitement, are historical exceptions to the First Amendment’s protections.107
Treating all online speech as presumptively protected not only ignores the nuances of First Amendment jurisprudence, but also elides the varying reasons why certain speech is viewed as distinctly important in our system of free expression.108 Some speech matters for self-expression, but not all speech does.109 Some speech is important for the search for truth or for self-governance, but not all speech serves those values. Also, as Kenneth Abraham and Edward White argue, the “all speech is free speech” view devalues the special cultural and social salience of speech about matters of public concern.110 It disregards the fact that speech about private individuals on purely private matters may not implicate free speech values at all.
The view that presumes all online activity is normatively significant free expression protected by the First Amendment reflects what Leslie Kendrick describes as “First Amendment expansionism”— “where the First Amendment’s territory pushes outward to encompass ever more areas of law.”111 As Kendrick observes, the temptations of First Amendment expansionism are heightened “in an information economy where many activities and products involve communication.”112 The debate over Section 230 bears this out.
The indulgent approach to Section 230 veers far away not only from the public discourse values at the core of the First Amendment, but also from the original intentions of Section 230’s sponsors. Christopher Cox, a former Republican Congressman who co-sponsored Section 230, has been openly critical of “how many Section 230 rulings have cited other rulings instead of the actual statute, stretching the law,” asserting that “websites that are ‘involved in soliciting’ unlawful materials or ‘connected to unlawful activity should not be immune under Section 230.’”113 The Democratic co-sponsor of Section 230, now-Senator Ron Wyden, has similarly emphasized that he “wanted to guarantee that bad actors would still be subject to federal law. Whether the criminals were operating on a street corner or online wasn’t going to make a difference.”114
There is no justification for treating the internet as a magical speech conversion machine: conduct that would not be speech protected by the First Amendment offline should not be transformed into speech merely because it occurs online. Even content that unquestionably qualifies as speech should not be presumed to be doctrinally or normatively protected. Intermediaries seeking to take advantage of Section 230’s protections — given that those protections were intended to foster free speech values — should have to demonstrate, rather than merely tacitly assert, that the content at issue is in fact speech, and further that it is speech protected by the First Amendment.
B. Neutrality and the State Action Doctrine
The conflation of the First Amendment with Section 230, and of internet activity with speech, contributes to another common misconception about the law, which is that it requires tech companies to act as “neutral public forums” in order to receive the benefit of immunity. Stated slightly differently, the claim here is that tech companies receive Section 230’s legal shield only if they refrain — as the First Amendment generally requires the government to refrain — from viewpoint discrimination. On this view, a platform’s removal, blocking, or muting of user-generated content based on viewpoint amounts to impermissible censorship under the First Amendment that should deprive the platform of its statutory protection against liability.115
This misconception is twofold. First, there is nothing in the legislative history or text of Section 230 that supports such an interpretation.116 Not only does Section 230 not require platforms to act neutrally vis-à-vis political viewpoints as state actors should, it urges exactly the opposite. Under Section 230(b)(4), one of the statute’s policy goals includes “remov[ing] disincentives for the development and utilization of blocking and filtering technologies.”117
Second, the “neutral platform” myth completely ignores the state action doctrine, which provides that obligations created by the First Amendment fall only upon government actors, not private actors. Attempting to extend First Amendment obligations to private actors is not only constitutionally incoherent but endangers the First Amendment rights of private actors against compelled speech.118
High-profile examples of the “neutral platform” argument include Senator Ted Cruz, who has argued that “big tech enjoys an immunity from liability on the assumption they would be neutral and fair. If they’re not going to be neutral and fair, if they’re going to be biased, we should repeal the immunity from liability so they should be liable like the rest of us.”119 Representative Greg Gianforte denounced Facebook’s refusal to run a gun manufacturer’s ads as blatant “censorship of conservative views.”120 Along these lines, Representative Louie Gohmert contended that “[i]nstead of acting like the neutral platforms they claim to be in order [to] obtain their immunity,” social media companies “act like a biased medium and publish their own agendas to the detriment of others.”121
It is not just politicians who have fallen under the spell of the viewpoint neutrality myth. The Daily Wire’s former Editor-at-Large, Josh Hammer, tweeted: “It is not government overreach to demand that Silicon Valley tech giants disclose their censorship algorithms in exchange for continuing to receive CDA Sec. 230 immunity.”122
Several legislative and executive proposals endeavor to reset Section 230 to incentivize platforms to act as quasi-governmental actors with a commitment to supposed viewpoint neutrality. One example is Senator Josh Hawley’s bill, the “Ending Support for Internet Censorship Act.”123 Under the Hawley proposal, Section 230’s legal shield would be conditioned on companies of a certain size obtaining FTC certification of their “political neutrality.” Under Representative Gohmert’s proposal, Section 230 immunity would be conditioned on a platform’s posting of user-generated content in chronological order. Making judgments about—in other words, moderating—content’s prominence and visibility would mean the loss of the legal shield.124 President Trump’s May 28, 2020 “Executive Order on Preventing Online Censorship” sounded a similar theme. Issued after Twitter took the unprecedented step of fact-checking two Trump tweets containing false information about mail-in ballots and marking them as factually unsupported, the order declared that Section 230 “immunity should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”125
It is important to note, first, that there is no empirical basis for the claim that conservative viewpoints are being suppressed on social media. In fact, there is weighty evidence indicating that right-wing content dominates social media. Responding to concerns about anti-conservative bias, Facebook hired former Senator Jon Kyl and lawyers at Covington & Burling to conduct an independent audit of the issue.126 The Covington Interim Report did not conclude that Facebook had an anti-conservative bias.127 As Siva Vaidhyanathan observes, there is no evidence supporting accusations that social media companies are disproportionately silencing conservative speech: the complaints are “simply false.”128 Many studies have found that conservative political campaigns have in fact leveraged social media to much greater advantage than their adversaries.129
But even if the claims of anti-conservative bias on platforms did have some basis in reality, the “neutral platform” interpretation of Section 230 takes two forms that actually serve to undermine, not promote, First Amendment values. The first involves the conflation of private companies with state actors, while the second characterizes social media platforms as public forums. Tech companies are not governmental or quasi-governmental entities, and social media companies and most online service providers are not publicly owned or operated.130 Both of these forms of misidentification ignore private actors’ own First Amendment rights to decide what content they wish to endorse or promote.
Neither Section 230 nor any judicial doctrine equates “interactive computer services” with state guarantors of First Amendment protections. As private actors, social media companies are no more required to uphold the First Amendment rights of their users than bookstores or restaurants are to uphold those of their patrons.131 As Eugene Kontorovich testified before the Senate Judiciary Committee’s hearing on “Stifling Free Speech: Technological Censorship and the Public Discourse”:
If tech platforms “engage in politically biased content-sorting . . . it is not a First Amendment issue. The First Amendment only applies to censorship by the government. . . . The conduct of private actors is entirely outside the scope of the First Amendment. If anything, ideological content restrictions are editorial decisions that would be protected by the First Amendment. Nor can one say that the alleged actions of large tech companies implicate ‘First Amendment values,’ or inhibits the marketplace of ideas in ways analogous to those the First Amendment seeks to protect against.”132
The alternative argument attempts to treat social media platforms as traditional public forums like parks, streets, or sidewalks. The public forum has a distinct purpose and significance in our constitutional order. The public forum is owned by the public and operated for the benefit of all.133 The public’s access to public parks, streets, and sidewalks is a matter of constitutional right.134 The public forum doctrine is premised on the notion that parks, streets, and sidewalks have been open for speech “immemorially . . . time out of mind.”135 For that reason, denying access to public parks, streets, and sidewalks on the basis of the content or viewpoint of speech is presumptively unconstitutional.136 But wholly privately owned social media platforms have never been designated as “neutral public forums.”137
As one of us (Franks) has written, the attempt to turn social media controversies into debates over the First Amendment is yet another example of what Frederick Schauer describes as “the First Amendment’s cultural magnetism.”138 It suggests that “because private companies like Facebook, Twitter, and Google have become ‘state like’ in many ways, even exerting more influence in some ways than the government, they should be understood as having First Amendment obligations, even if the First Amendment’s actual text or existing doctrine would not support it.”139 Under this view, the First Amendment should be expanded beyond its current borders.
But the erosion of the state action doctrine would actually undermine First Amendment rights by depriving private actors of “a robust sphere of individual liberty,” as Justice Kavanaugh recently expressed it in Manhattan Cmty. Access Corp. v. Halleck.140 An essential part of the right to free speech is the right to choose what to say, when to say it, and to whom. Indeed, the right not to speak is a fundamental aspect of the First Amendment’s protections. As the Court famously held in West Virginia State Board of Education v. Barnette, “If there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion, or force citizens to confess by word or act their faith therein.”141
If platforms are treated as governmental actors or their services deemed public fora, then they could not act as “Good Samaritans” to block online abuse. This result would directly contravene the will of Section 230’s drafters.142 For instance, social media companies could not combat spam, doxing, nonconsensual pornography, or deepfakes.143 They could not prohibit activity that chases people offline. In our view, it is desirable for platforms to address online abuse that imperils people’s ability to enjoy life’s crucial opportunities, including the ability to engage with others online.
At the same time, the power that social media companies and other platforms have over digital expression should not proceed unchecked, as it does now in some respects. Currently, Section 230(c)(1)—the provision related to under-filtering content—shields companies from liability without any limit or condition, unlike Section 230(c)(2), which conditions the immunity for over-filtering on a showing of “good faith.”144 In Part IV, we offer legislative reforms that would check the power afforded platforms. The legal shield should be cabined to interactive computer services that wield their content-moderation powers responsibly, as the drafters of Section 230 wanted.145
We would lose much and gain little if Section 230 were replaced with the Hawley or Gohmert proposals, or if Trump’s Executive Order were given practical effect.146 Section 230 already has a mechanism to address the unwarranted silencing of viewpoints.147 Under Section 230(c)(2), users or providers of interactive computer services enjoy immunity from liability for over-filtering or over-blocking speech only if they acted in “good faith.” Under current law, platforms could face liability for removing or blocking content without “good faith” justification, if a theory of relief exists on which they can be sued.148
C. The Myth that Any Change to Section 230 Would Destroy Free Speech
Another myth is that any Section 230 reform would jeopardize free speech in a larger sense, even if not strictly in the sense of violating the First Amendment. Of course, free speech is a cultural as well as a constitutional matter. It is shaped by non-legal as well as legal norms, and tech companies play an outsized role in establishing those norms. We agree that there is good reason to be concerned about the influence of tech companies and other powerful private actors over the ability of individuals to express themselves. This is an observation we have been making for years—that some of the most serious threats to free speech come not from the government, but from non-state actors.149 Marginalized groups in particular, including women and racial minorities, have long battled with private censorial forces as well as governmental ones. But the unregulated internet — or rather, the selectively regulated internet — is exacerbating, not ameliorating, this problem. The current state of Section 230 may ensure free speech for the privileged few; protecting free speech for all requires reform.
The concept of “cyber civil rights”150 speaks precisely to the reality that the internet has rolled back many gains made for racial and gender equality. The anonymity, amplification, and aggregation possibilities offered by the internet have allowed private actors to discriminate, harass, and threaten vulnerable groups on a massive scale.151 There is empirical evidence showing that the internet has been used to further chill the intimate, artistic, and professional expression of individuals whose rights were already under assault offline.152
Even as the internet has multiplied the possibilities of expression, it has multiplied the possibilities of repression.153 The new forms of communication offered by the internet have been used to unleash a regressive and censorious backlash against women, racial minorities, and sexual minorities. The internet lowers the costs of engaging in abuse by providing abusers with anonymity and social validation, while providing new ways to increase the range and impact of that abuse. The online abuse of women in particular amplifies sexist stereotyping and discrimination, compromising gender equality online and off.154
The reality of unequal free speech rights demonstrates how regulation can, when done carefully and well, enhance and diversify speech rather than chill it. According to a 2017 study, regulating online abuse “may actually facilitate and encourage more speech, expression, and sharing by those who are most often the targets of online harassment: women.”155 The study’s author suggests that when women “feel less likely to be attacked or harassed,” they become more “willing to share, speak, and engage online.” Knowing that there are laws criminalizing online harassment and stalking “may actually lead to more speech, expression, and sharing online among adult women online, not less.” As expressed in the title of a recent article by one of us (Citron) and Jonathon Penney, sometimes “law frees us to speak.”156
IV. Moving Beyond the Myths: A Menu of Potential Solutions
Having addressed misconceptions about the relationship between Section 230 and the First Amendment, state and private actors, and regulation and free speech outcomes, we turn to reform proposals that address the problems that actually exist and are legitimately concerning. This Part explores different possibilities for fixing the overbroad interpretation of Section 230.
A. Against Carveouts
Some reformers urge Congress to maintain Section 230’s immunity but to create an explicit exception from its legal shield for certain types of behavior. A recent example of that approach is the Stop Enabling Sex Traffickers Act (SESTA),157 which passed by an overwhelming vote in 2018. The law amended Section 230 by rendering websites liable for knowingly hosting sex trafficking content.158
That law, however, is flawed. By effectively pinning the legal shield on a platform’s lack of knowledge of sex trafficking, the law arguably reprises the dilemma that led Congress to pass Section 230 in the first place. To avoid liability, some platforms have resorted to either filtering everything related to sex or sitting on their hands so they cannot be said to have knowingly facilitated sex trafficking.159 That is the opposite of what the drafters of Section 230 claimed to want—responsible content moderation practices.
While we sympathize with the impulse to address particularly egregious harms, the best way to reform Section 230 is not through a piecemeal approach. The carveout approach is inevitably underinclusive, establishing a normative hierarchy of harms that leaves other harmful conduct to be addressed another day. Such an approach would require Section 230’s exceptions to be regularly updated, an impractical option given the slow pace of congressional efforts and partisan deadlock.160
B. A Modest Proposal—Speech, Not Content
In light of the observations made in Part III.A, one simple reform of Section 230 would be to make explicit that the statute’s protections apply only to speech. The statutory fix is simple: replace the word “information” in (c)(1) with the word “speech.” Thus, that section of the statute would read:
(1) Treatment of publisher or speaker: No provider or user of an interactive computer service shall be treated as the publisher or speaker of any speech provided by another information content provider.
This revision would put all parties in a Section 230 case on notice that the classification of content as speech is not a given, but a fact to be demonstrated. If a platform cannot make a showing that the content or information at issue is speech, then it should not be able to take advantage of Section 230 immunity.
C. Excluding Bad Samaritans
Another effective and modest adjustment would involve amending Section 230 to exclude bad actors from its legal shield. There are a few ways to do this. One possibility would be to deny the immunity to online service providers that “deliberately leave up unambiguously unlawful content that clearly creates a serious harm to others.”161 Another would be to exclude from the immunity “the very worst actors”: sites that encourage illegality or that principally host it.162 Yet another approach would be to exclude intermediaries who exhibit deliberate indifference to unlawful content or conduct.
A variant on this theme would deny the legal shield to cases involving platforms that have solicited or induced unlawful content. This approach takes a page from intermediary liability rules in trademark and copyright law. As Stacey Dogan observed in that context, inducement doctrines allow courts to target bad actors whose business models center on infringement.163 Providers that solicit or induce unlawful content should not enjoy immunity from liability. This approach targets the harmful activity while providing breathing space for protected expression.164
A version of this approach is embraced in the SHIELD Act of 2019,165 which one of us (Franks) assisted in drafting and the other (Citron) supported by advising lawmakers on behalf of the Cyber Civil Rights Initiative. Because SHIELD would create a federal criminal law, Section 230 could not be invoked as a defense to violations of it. However, the proposed bill creates a separate liability standard for providers of communications services that effectively grants them Section 230 immunity so long as the provider does not intentionally solicit, or knowingly and predominantly distribute, content that the provider actually knows is in violation of the statute.166
D. Conditioning the Legal Shield on Reasonable Content Moderation
There is a broader legislative fix that Benjamin Wittes and one of us (Citron) have proposed. Under that proposal, platforms would enjoy immunity from liability if they could show that their content-moderation practices writ large are reasonable. The revision to Section 230(c)(1) would read as follows:
No provider or user of an interactive computer service that takes reasonable steps to address unlawful uses of its service that clearly create serious harm to others shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.
If adopted, the question before the courts in a motion to dismiss on Section 230 grounds would be whether a defendant employed reasonable content moderation practices in the face of unlawful activity that manifestly causes harm to individuals. The question would not be whether a platform acted reasonably with regard to a specific use of the service. Instead, the court would ask whether the provider or user of a service engaged in reasonable content moderation practices writ large with regard to unlawful uses that create serious harm to others.167
Congressman Devin Nunes has argued that reasonableness is a vague and unworkable policy,168 while Eric Goldman considers the proposal a “radical change that would destroy Section 230.” In Goldman’s estimation, “such amorphous eligibility standards” would make “Section 230 litigation far less predictable, and it would require expensive and lengthy factual inquiries into all evidence probative of the reasonableness of defendant’s behavior.”169
Yes, a reasonableness standard would require evidence of a site’s content moderation practices. But impossibly vague or amorphous it is not. Courts have assessed the reasonableness of practices in varied fields, from tort law to the Fourth Amendment’s ban on unreasonable searches and seizures.170 In a wide variety of contexts, the judiciary has invested the concept of reasonableness with meaning.171 As John Goldberg and Benjamin Zipursky have argued, tort law sets norms of behavior in recognizing wrongful, injury-inflicting conduct, and it empowers victims to seek redress.172
Courts are well suited to address the reasonableness of a platform’s speech policies and practices vis-à-vis particular forms of illegality that cause clear harm to others and that lie at the heart of a litigant’s claims. The reasonableness inquiry would begin with the alleged wrongdoing and liability. To state the obvious, platforms are not strictly liable for all content posted on their sites. Plaintiffs need a cognizable theory of relief to assert against content platforms. Section 230’s legal shield would turn on whether the defendant employed reasonable content moderation practices to deal with the specific kind of harmful illegality alleged in the suit.
There is no one-size-fits-all approach to reasonable content moderation. Reasonableness would be tailored to the harmful conduct alleged in the case. A reasonable approach to sexual-privacy invasions would be different from a reasonable approach to spam or fraud. The question would then be whether the online platform—given its size, user base, and volume—adopted reasonable content moderation practices vis-à-vis the specific illegality in the case. Did the platform have clear rules and a process to deal with complaints about illegal activity? What did that process entail? The assessment of reasonable content-moderation practices would take into account differences among content platforms. A blog with a few postings a day and a handful of commenters is in a different position than a social network with millions of postings a day. The social network could not plausibly respond to complaints of abuse immediately, let alone within a day or two, whereas the blog could. On the other hand, the social network and the blog could deploy technologies to detect and filter content that they previously determined was unlawful.173
Suppose a porn site is sued for public disclosure of private facts and negligent enablement of a crime. The defendant’s site, which hosts hundreds of thousands of videos, encourages users to post porn videos. The defendant’s terms of service (TOS) ban nonconsensual pornography and doxing (the posting of someone’s contact information). In the complaint, the plaintiff alleges that her nude photo and home address were posted on the defendant’s site without her consent. Following this disclosure, strangers came to the plaintiff’s house at night demanding sex. One of those strangers broke into her house. Although the plaintiff immediately reported the post as a TOS violation, defendant did nothing for three weeks.
Defendant moves to dismiss the complaint on Section 230 grounds. It submits evidence showing that it has a clear policy against nonconsensual pornography and a process to report abuse. Defendant acknowledges that its moderators did not act quickly enough in plaintiff’s case, but maintains that, generally speaking, its practices satisfy the reasonableness inquiry. However, defendant offers no evidence that it actually engages in content moderation pursuant to that policy.
Is there sufficient evidence that the defendant engaged in reasonable content moderation practices so that the court can dismiss the case against it? Likely no. Yes, the defendant has clearly stated standards notifying users that it bans nonconsensual pornography. And yet the site has provided no proof that it has a systematic process to consider complaints about such illegality.174 In assessing reasonableness, it would matter to the court that the site has hundreds of thousands of videos to moderate. The volume of the content is relevant to the likelihood of potential harm and the requirements to address such harm. The absence of a systematic process to respond to complaints of nonconsensual pornography shows the absence of reasonableness in the site’s practices writ large.175
A reasonableness standard would not “effectively ‘lock in’ certain approaches, even if they are not the best or don’t apply appropriately to other forms of content,” as critics suggest.176 The promise of a reasonableness approach is its elasticity. As technology and content moderation practices change, so will the reasonableness of practices. As new kinds of harmful online activity emerge, so will the strategies for addressing them. At the same time, a reasonableness approach would pave the way for the development of norms around content moderation practices, such as having clear policies in place, accessible reporting systems, and content moderation practices tailored to particular forms of illegality.
A reasonable standard of care will reduce opportunities for abuse without discouraging further development of a vibrant internet or turning innocent platforms into involuntary insurers for those injured through their sites. Approaching the problem of addressing online abuse by setting an appropriate standard of care readily allows differentiation among different kinds of online actors. Websites that solicit illegality or that refuse to address unlawful activity that clearly creates serious harm should not enjoy immunity from liability. On the other hand, platforms that have safety and speech policies that are transparent and reasonably executed at scale should enjoy immunity from liability, as the drafters of Section 230 intended.
V. Conclusion
Reforming Section 230 is long overdue. With Section 230, Congress sought to provide incentives for “Good Samaritans” to engage in efforts to moderate content. That goal was laudable. But market pressures and morals are not always enough, and they should not have to be.
A crucial component in any reform project is clear-eyed thinking. And yet clear-eyed thinking about the internet is often difficult. The Section 230 debate is, like many other tech policy reform projects, beset by misconceptions. We have taken this opportunity to dispel myths around Section 230 so that this reform moment, long in coming and much anticipated, is not wasted or exploited.
- 1See generally Danielle Keats Citron, Hate Crimes in Cyberspace (2014); see also Danielle Keats Citron & Benjamin Wittes, The Internet Will Not Break: Denying Bad Samaritans Section 230 Immunity, 86 Fordham L. Rev. 401 (2017); Danielle Keats Citron, Cyber Civil Rights, 89 B.U. L. Rev. 61 (2009); Mary Anne Franks, Sexual Harassment 2.0, 71 Md. L. Rev. 655 (2012).
- 2That is, beyond the select avenues that currently are not shielded from liability, such as intellectual property, federal criminal law, the Electronic Communications Privacy Act, and the knowing facilitation of sex trafficking.
- 347 U.S.C. § 230 (2018). According to Blake Reid, the most accurate citation for the law is “Section 230 of the Communications Act of 1934”; we have retained “Section 230 of the Communications Decency Act” because of its common usage. Blake Reid, Section 230 of… What?, blake.e.reid (Sept. 4, 2020), https://blakereid.org/section-230-of-what/ [https://perma.cc/DUL6-DKK2].
- 4See generally Hearing on Fostering a Healthier Internet to Protect Consumers Before the H. Comm. on Energy and Commerce, 116th Cong. (2019) (statement of Danielle Keats Citron, Professor, B.U. Law Sch.) (available at https://docs.house.gov/meetings/IF/IF16/20191016/110075/HHRG-116-IF16-Wstate-CitronD-20191016.pdf [https://perma.cc/9F2V-BHKL]).
- 5Alina Selyukh, Section 230: A Key Legal Shield for Facebook, Google is About to Change, NPR (Mar. 21, 2018), https://www.npr.org/sections/alltechconsidered/2018/03/21/591622450/section-230-a-key-legal-shield-for-facebook-google-is-about-to-change [https://perma.cc/FG5N-MJ5T].
- 6See Stratton Oakmont, Inc. v. Prodigy Servs. Co., No. 31063/94, 1995 WL 323710 (N.Y. Sup. Ct. 1995); see also Jeff Kosseff, The Twenty-Six Words that Created the Internet (2019) (offering an excellent history of Section 230 and the cases leading to its passage).
- 7Citron, Hate Crimes in Cyberspace, supra note 1, at 170–73.
- 8Citron & Wittes, supra note 1, at 404–06.
- 9Id.
- 10Id.
- 11See CDA 230: The Most Important Law Protecting Internet Speech, Electronic Frontier Found., https://www.eff.org/issues/cda230 [https://perma.cc/W75F-6MRN].
- 12See Danielle Keats Citron & Quinta Jurecic, Platform Justice: Content Moderation at an Inflection Point at 1, 4 (Hoover Inst., Aegis Series Paper No. 1811, 2018), https://www.hoover.org/sites/default/files/research/docs/citron-jurecic_webreadypdf.pdf [https://perma.cc/6XZY-9HBF].
- 13See Sen. Cruz: Latest Twitter Bias Underscores Need for Big Tech Transparency, U.S. Senator for Tex. Ted Cruz (Aug. 16, 2019), https://www.cruz.senate.gov/?p=press_release&id=4630 [https://perma.cc/23UU-SWF7].
- 14Marguerite Reardon, Democrats and Republicans Agree that Section 230 is Flawed, CNET (June 21, 2020), https://www.cnet.com/news/democrats-and-republicans-agree-that-section-230-is-flawed/ [https://perma.cc/6VJG-DW5W].
- 15See Hearing on “Fostering a Healthier Internet to Protect Consumers,” House Committee on Energy & Com., https://energycommerce.house.gov/committee-activity/hearings/hearing-on-fostering-a-healthier-internet-to-protect-consumers [https://perma.cc/4YK2-595J]. Witnesses also included computer scientist Hany Farid of the University of California at Berkeley, Gretchen Peters of the Alliance to Counter Crime Online, Corynne McSherry of the Electronic Frontier Foundation, Steve Huffman of Reddit, and Katie Oyama of Google. Id. At that hearing, one of us (Citron) took the opportunity to combat myths around Section 230 and offer sensible reform possibilities, which we explore in Part III.
- 16See Section 230 Workshop—Nurturing Innovation or Fostering Unaccountability?, U.S. Dep’t of Just. (Feb. 19, 2020), https://www.justice.gov/opa/video/section-230-workshop-nurturing-innovation-or-fostering-unaccountability [https://perma.cc/PQV2-MZGZ]. The roundtable raised issues explored here as well as questions about encryption, which we do not address here.
- 17There are several House and Senate proposals to amend or remove Section 230’s legal shield.
- 18See Adi Robertson, Why The Internet’s Most Important Law Exists and How People are Still Getting it Wrong, Verge (June 21, 2019), https://www.theverge.com/2019/6/21/18700605/section-230-internet-law-twenty-six-words-that-created-the-internet-jeff-kosseff-interview [https://perma.cc/6ALQ-XN43]; see also Matt Laslo, The Fight Over Section 230—and the Internet as We Know It, Wired (Aug. 13, 2019), https://www.wired.com/story/fight-over-section-230-internet-as-we-know-it/ [https://perma.cc/D9XG-BYB5].
- 19See Dylan Byers, Senate Fails its Zuckerberg Test, CNN Bus. (Apr. 11, 2018), https://money.cnn.com/2018/04/10/technology/senate-mark-zuckerberg-testimony/index.html [https://perma.cc/Y2M6-3RMG]. The 2018 congressional hearings on the Cambridge Analytica data leak poignantly illustrate the point. In questioning Facebook CEO Mark Zuckerberg over several days of testimony before the House and the Senate, some lawmakers made clear that they had never used the social network and had little understanding of online advertising, the business model of the dominant tech companies. To take one example of many, Senator Orrin Hatch asked Zuckerberg how his company made money since it does not charge users for its services. See Hearing on Facebook, Social Media Privacy, and the Use and Abuse of Data Before the S. Comm. on the Judiciary, 115th Cong. (2018); see also Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power 479–88 (2019). As is clear from committee hearings and our work, however, there are lawmakers and staff devoted to tackling tech policy, including Senator (now Vice President–Elect) Kamala Harris, Senator Richard Blumenthal, Senator Mark Warner, Congresswoman Jackie Speier, and Congresswoman Katherine Clark, who exhibit greater familiarity with tech companies and their practices.
- 20According to conventional wisdom, it can take years for bills to become law. Perhaps unsurprisingly, the process is speedier when lawmakers’ self-interests hang in the balance. The Video Privacy Protection Act’s rapid-fire passage is an obvious case in point. That law passed less than a year after the failed nomination of Judge Robert Bork to the Supreme Court revealed that journalists could easily obtain people’s video rental records. Video Privacy Protection Act, Wikipedia (Sept. 2, 2020), https://en.wikipedia.org/wiki/Video_Privacy_Protection_Act [https://perma.cc/8WJD-JB2P]. Lawmakers, fearing that their own video rental records would be released to the public, passed the VPPA in short order. Id.
- 21Or at least this is the most generous reading of its history. See Mary Anne Franks, The Cult of the Constitution (2019) (showing that one of us (Franks) is somewhat more skeptical about the narrative that Section 230’s flaws were not evident at its inception).
- 22Selyukh, supra note 5.
- 23See Citron & Wittes, supra note 1, at 406.
- 24See id. at 418.
- 25Kosseff, supra note 6, at 71–74; Citron, Cyber Civil Rights, supra note 1.
- 26Id. at 403.
- 27Id. at 408.
- 28Communications Decency Act, 47 U.S.C. § 230(c)(1) (1996).
- 29Id. § 230(c)(2).
- 30Id. § 230(e).
- 31See Danielle Keats Citron, Section 230’s Challenge to Civil Rights and Civil Liberties, Knight First Amend. Inst. (Apr. 6, 2018), https://knightcolumbia.org/content/section-230s-challenge-civil-rights-and-civil-liberties [https://perma.cc/ARY6-KTE8].
- 32See id.
- 33See id.
- 34See Citron & Wittes, supra note 1, at 406–10; Mary Anne Franks, How the Internet Unmakes the Law, 16 Ohio St. Tech. L.J. 10 (2020); see also Olivier Sylvain, Recovering Tech’s Humanity, 119 Colum. L. Rev. Forum 252 (2020) (explaining that “common law has not had a meaningful hand in shaping intermediaries’ moderation of user-generated content because courts, citing Section 230, have foresworn the law’s application”).
- 35Franks, How the Internet Unmakes the Law, supra note 34, at 17–22.
- 36See id.
- 37See Citron, Section 230’s Challenge to Civil Rights and Civil Liberties, supra note 31. See generally Olivier Sylvain, Intermediary Design Duties, 50 Conn. L. Rev. 1 (2017).
- 38Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 18 (1st Cir. 2016), cert. denied, 137 S. Ct. 622 (2017).
- 39See, e.g., Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1099 (9th Cir. 2009).
- 40See Mary Anne Franks, The Lawless Internet? Myths and Misconceptions About CDA Section 230, Huffington Post (Feb. 17, 2014), https://www.huffpost.com/entry/section-230-the-lawless-internet_b_4455090 [https://perma.cc/R6SF-X4WQ].
- 41See, e.g., Hinton v. Amazon.com.DEDC, LLC, 72 F. Supp. 3d 685, 687–90 (S.D. Miss. 2014); see also Franks, How the Internet Unmakes the Law, supra note 34, at 14.
- 42See Armslist Firearm Marketplace, https://www.armslist.com/ [https://perma.cc/VX34-GVB4].
- 43See id.
- 44See id.
- 45See id.
- 46See id.
- 47See Daniel v. Armslist, LLC, 926 N.W.2d 710 (Wis. 2019), cert. denied, 140 S. Ct. 562 (2019). The nonprofit organization the Cyber Civil Rights Initiative, of which one of us (Franks) is the President and one of us (Citron) is the Vice President, filed an amicus brief in support of the petitioner’s request for a writ of certiorari in the Supreme Court. See Brief for the Cyber Civil Rights Initiative and Legal Scholars et al. as Amici Curiae Supporting Petitioners, Daniel v. Armslist, LLC, 140 S. Ct. 562 (2019) (No. 19-153).
- 48See Brief for the Cyber Civil Rights Initiative and Legal Scholars et al. as Amici Curiae Supporting Petitioners at 16, Daniel v. Armslist, LLC, 140 S. Ct. 562 (2019) (No. 19-153).
- 49Rebecca Tushnet, Power without Responsibility: Intermediaries and the First Amendment, 76 Geo. Wash. L. Rev. 986, 1002 (2008).
- 50See Mary Anne Franks, Justice Beyond Dispute, 131 Harv. L. Rev. 1374, 1386 (2018) (reviewing Ethan Katsh & Orna Rabinovich-Einy, Digital Justice: Technology and the Internet of Disputes (2017)).
- 51Danielle Keats Citron, Cyber Mobs, Disinformation, and Death Videos: The Internet As It Is (and As It Should Be), 118 Mich. L. Rev. 1073 (2020).
- 52See id.
- 53Kim Lyons, Twitter allowed ad targeting based on ‘neo-Nazi’ keyword, Verge (Jan. 16, 2020), https://www.theverge.com/2020/1/16/21069142/twitter-neo-nazi-keywords-ad-targeting-bbc-policy-violation [https://perma.cc/RQ9G-S5AT].
- 54See Dissenting Statement of Federal Trade Commissioner Rohit Chopra, In re Facebook, Inc., Commission File No. 1823109, at 2 (July 24, 2019).
- 55Id.
- 56Id.
- 57Id.
- 58See Franks, Justice Beyond Dispute, supra note 50, at 1386.
- 59See Danielle Keats Citron, Extremist Speech, Compelled Conformity, and Censorship Creep, 93 Notre Dame L. Rev. 1035, 1039 (2018); see also Danielle Keats Citron & Helen Norton, Intermediaries and Hate Speech: Fostering Digital Citizenship for the Information Age, 91 B.U. L. Rev. 1435, 1468–71 (2011).
- 60See id. at 1038–39.
- 61See Mary Anne Franks, “Revenge Porn” Reform: A View from the Front Lines, 69 Fla. L. Rev. 1252, 1312 (2017).
- 62Id. at 1037.
- 63See Citron, Hate Crimes in Cyberspace, supra note 1, at 229 (discussing how Facebook changed its position on pro-rape pages after fifteen companies threatened to pull their ads); see also Franks, “Revenge Porn” Reform: A View from the Front Lines, supra note 61, at 1312.
- 64See Deeptrace Labs, The State of Deepfakes: Landscape, Threats, and Impact, Deeptrace.com (Sept. 2019), https://storage.googleapis.com/deeptrace-public/Deeptrace-the-State-of-Deepfakes-2019.pdf [https://perma.cc/J2ML-2G2Y] (noting that eight of the top ten pornography websites host deepfake pornography and that nine dedicated deepfake pornography websites host 13,254 fake porn videos, mostly featuring female celebrities without their consent; these sites generate income from advertising. Indeed, as the first comprehensive study of deepfake video and audio explains, “deepfake pornography represents a growing business opportunity, with all of these websites featuring some form of advertising”).
- 65See id.
- 66Eugene Volokh, TheDirty.com Not Liable for Defamatory Posts on the Site, Wash. Post (June 16, 2014), https://www.washingtonpost.com/news/volokh-conspiracy/wp/2014/06/16/thedirty-com-not-liable-for-defamatory-posts-on-the-site/ [https://perma.cc/5FBB-2B59].
- 67Citron, Hate Crimes in Cyberspace, supra note 1, at 171.
- 68See Mary Anne Franks, Moral Hazard on Stilts: ‘Zeran’s’ Legacy, Law.com (Nov. 10, 2017), https://www.law.com/therecorder/sites/therecorder/2017/11/10/moral-hazard-on-stilts-zerans-legacy/ [https://perma.cc/74DL-B7BK].
- 69See generally Citron, Hate Crimes in Cyberspace, supra note 1. See Maeve Duggan, Online Harassment 2017 Study, Pew Res. Ctr. (July 11, 2017), https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/ [https://perma.cc/7H6B-VAP2] (noting that the 2017 Pew study found that one in four Black individuals say they have been subjected to online harassment due to their race; one in ten Hispanic individuals say the same. For white individuals, the share is far lower: just three percent. Women are twice as likely as men to say they have been targeted online due to their gender (11 percent versus 5 percent)); see also Data & Society, Online Harassment, Digital Abuse, and Cyberstalking in America, Ctr. for Innovative Pub. Health Res. (Nov. 21, 2016), https://innovativepublichealth.org/wp-content/uploads/2_Online-Harassment-Report_Final.pdf [https://perma.cc/P5M8-CARR] (showing that other studies have made clear that LGBTQ individuals are particularly vulnerable to online harassment and nonconsensual pornography).
- 70See Duggan, supra note 69.
- 71Citron, Hate Crimes in Cyberspace, supra note 1, at 13–14.
- 72Id.
- 73See Franks, The Cult of the Constitution, supra note 21, at 197.
- 74Id.
- 75Citron, Hate Crimes in Cyberspace, supra note 1, at 13–14.
- 76See Jonathon W. Penney, Chilling Effects: Online Surveillance and Wikipedia Use, 31 Berkeley Tech. L.J. 117, 125–26 (2016); see also Jonathon W. Penney, Internet Surveillance, Regulation, and Chilling Effects Online: A Comparative Case Study, 6 Internet Pol’y Rev. 1, 3 (2017). See generally Citron, Hate Crimes in Cyberspace, supra note 1, at 192–95; Danielle Keats Citron, Civil Rights in Our Information Age, in The Offensive Internet (Saul Levmore & Martha C. Nussbaum eds., 2010); Citron & Richards, infra note 133, at 1365 (“[N]ot everyone can freely engage online. This is especially true for women, minorities, and political dissenters who are more often the targets of cyber mobs and individual harassers.”); Danielle Keats Citron & Mary Anne Franks, Criminalizing Revenge Porn, 49 Wake Forest L. Rev. 345, 385 (2014); Citron, Cyber Civil Rights, supra note 1, at 67, 104–05; Franks, The Cult of the Constitution, supra note 21, at 197.
- 77See Citron, Cyber Civil Rights, supra note 1.
- 78Katie Hill, for instance, resigned from Congress after her estranged husband disclosed intimate photos of her and another woman without consent. See generally Rebecca Green, Candidate Privacy, 95 Wash. L. Rev. 205 (2020).
- 79See, e.g., Michelle Ferrier, Attacks and Harassment: The Impact on Female Journalists and Their Reporting, Int’l Women’s Media Found. 7 (2018), https://www.iwmf.org/wp-content/uploads/2018/09/Attacks-and-Harassment.pdf [https://perma.cc/3B79-FJF80]; see also Women Journalists and the Double Bind: Choosing Silence over Being Silenced, Ass’n for Progressive Commc’n (2018), https://www.apc.org/sites/default/files/Gendering_Self-Censorship_Women_and_the_Double_Bind.pdf [https://perma.cc/F5V5-538U] (providing statistics on self-censorship by female journalists in Pakistan); Internet Health Report 2019, Mozilla Found. 64 (2019), https://www.transcript-verlag.de/media/pdf/1a/ce/ac/oa9783839449462.pdf [https://perma.cc/3M2G-GHVF] (“Online abusers threaten and intimidate in an effort to silence the voices of especially women, nonbinary people, and people of color.”).
- 80See Danielle Keats Citron, Sexual Privacy, 128 Yale L.J. 1870, 1916 (2019).
- 81See generally Franks, The Cult of the Constitution, supra note 21.
- 82Citron & Wittes, supra note 1.
- 83FOSTA-SESTA stands as a case in point. One of us (Citron) worked closely with federal lawmakers on the FOSTA-SESTA bills only to be sorely disappointed with the results. See Part IV.
- 84See Elizabeth Nolan Brown, Section 230 Is the Internet’s First Amendment. Now Both Republicans and Democrats Want to Take it Away., Reason (July 29, 2019), https://reason.com/2019/07/29/section-230-is-the-internets-first-amendment-now-both-republicans-and-democrats-want-to-take-it-away/ [https://perma.cc/EW8Z-GVF7].
- 85See Makena Kelly, Conservative Groups Push Congress Not to Meddle with Internet Law, Verge (July 10, 2019), https://www.theverge.com/2019/7/10/20688778/congress-section-230-conservative-internet-law-content-moderation [https://perma.cc/W5ZA-FH29].
- 86Eric Goldman, Why Section 230 Is Better than the First Amendment, 95 Notre Dame L. Rev. Reflections 33, 33 (2019).
- 8747 U.S.C. § 230(a)(3).
- 88See Franks, How the Internet Unmakes the Law, supra note 34.
- 89Kosseff, supra note 6, at 59–61; Citron & Richards, infra note 133; Sylvain, supra note 37, at 19 (“back then think electronic bulletin boards, online chatrooms, and newsgroups”).
- 90See J. Clement, Internet Usage in the United States - Statistics & Facts, Statista (Aug. 20, 2019), https://www.statista.com/topics/2237/internet-usage-in-the-united-states/ [https://perma.cc/U8U7-BEVR].
- 91See Citron, Hate Crimes in Cyberspace, supra note NOTEREF _Ref35711629 \h \* MERGEFORMAT 1 08D0C9EA79F9BACE118C8200AA004BA90B02000000080000000D0000005F00520065006600330035003700310031003600320039000000 , at 191–92; J. Clement, Most Popular Online Activities of Adult Internet Users in the United States as of November 2017, Statista (Nov. 7, 2018), https://www.statista.com/statistics/183910/internet-activities-of-us-users/ [https://perma.cc/QA5D-6KBB].
- 92Nolan Brown, supra note 84.
- 93See Frederick Schauer, The Politics and Incentives of First Amendment Coverage, 56 Wm. & Mary L. Rev. 1613, 1617–18 (2015).
- 94Frederick Schauer, The Boundaries of the First Amendment: A Preliminary Exploration of Constitutional Salience, 117 Harv. L. Rev. 1765, 1769 (2004).
- 95See Citron, Cyber Civil Rights, supra note 1.
- 96Giboney v. Empire Storage & Ice Co., 336 U.S. 490, 502 (1949).
- 97See, e.g., Tinker v. Des Moines Indep. Cmty. Sch. Dist., 393 U.S. 503, 504 (1969) (wearing of black armbands conveyed message regarding a matter of public concern).
- 98See Texas v. Johnson, 491 U.S. 397, 406 (1989); United States v. O’Brien, 391 U.S. 367, 376–77 (1968).
- 99Tinker, 393 U.S. 503.
- 100Johnson, 491 U.S. 397.
- 101Citizens United v. Fed. Election Comm’n, 558 U.S. 310 (2010).
- 102O’Brien, 391 U.S. 367.
- 103See Goldman, supra note 86, at 6.
- 104See Sylvain, Intermediary Design Duties, supra note 37, at 28; see also Citron, Section 230’s Challenge to Civil Rights and Civil Liberties, supra note 31 (arguing that claims about platforms’ user interfaces or designs do not involve speech but rather actions such as inducing breaches of trust or illegal discrimination).
- 105See United States v. Stevens, 559 U.S. 460, 468–69 (2010) (noting the existence of “well-defined and narrowly limited classes of speech, the prevention and punishment of which have never been thought to raise any Constitutional problem” (citing Chaplinsky v. New Hampshire, 315 U.S. 568 (1942))).
- 106Dun & Bradstreet, Inc. v. Greenmoss Builders, Inc., 472 U.S. 749, 758–59 (1985) (quoting First Nat’l Bank of Bos. v. Bellotti, 435 U.S. 765, 776 (1978)).
- 107United States v. Stevens, 559 U.S. 460, 468 (2010), superseded by statute, 18 U.S.C. § 48 (2012).
- 108Id.
- 109Id.
- 110See Kenneth S. Abraham & G. Edward White, First Amendment Imperialism and the Constitutionalization of Tort Liability, Tex. L. Rev. (forthcoming).
- 111See Leslie Kendrick, First Amendment Expansionism, 56 Wm. & Mary L. Rev. 1199, 1200 (2015) (explaining that freedom of speech is a “term of art that does not refer to all speech activities, but rather designates some area of activity that society takes, for some reason, to have special importance”).
- 112Id.
- 113See Selyukh, supra note 5.
- 114See Ron Wyden, Floor Remarks: CDA 230 and SESTA, Medium (Mar. 21, 2018), https://medium.com/@RonWyden/floor-remarks-cda-230-and-sesta-32355d669a6e [https://perma.cc/6SY9-WCD9].
- 115See Catherine Padhi, Ted Cruz vs. Section 230: Misrepresenting the Communications Decency Act, Lawfare (Apr. 20, 2018), https://www.lawfareblog.com/ted-cruz-vs-section-230-misrepresenting-communications-decency-act [https://perma.cc/CP39-2VGA].
- 116See David Ingram & Jane C. Timm, Why Republicans (and Even a Couple of Democrats) Want to Throw Out Tech’s Favorite Law, NBC News (Sept. 2, 2019), https://www.nbcnews.com/politics/congress/why-republicans-even-couple-democrats-want-throw-out-tech-s-n1043346 [https://perma.cc/5UFA-FATJ] (highlighting that former Rep. Cox recently underscored the fact that “nowhere, nowhere, nowhere does the law say anything about [neutrality]”).
- 11747 U.S.C. § 230(b)(4).
- 118See generally W. Va. State Bd. of Educ. v. Barnette, 319 U.S. 624 (1943); see Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921 (2019).
- 119See Cale G. Weisman, Ted Cruz made it clear he supports repealing tech platforms’ safe harbor, Fast Co. (Oct. 17, 2018), https://www.fastcompany.com/90252598/ted-cruz-made-it-clear-he-supports-repealing-tech-platforms-safe-harbor [https://perma.cc/X3AU-MAMC]; see also Mike Masnick, Senator Mark Warner Repeats Senator Ted Cruz’s Mythical, Made Up, Incorrect Claims About Section 230, Techdirt (Oct. 3, 2019), https://www.techdirt.com/articles/20190929/00171443090/senator-mark-warner-repeats-senator-ted-cruzs-mythical-made-up-incorrect-claims-about-section-230.shtml [https://perma.cc/5X2X-CVVT] (explaining that Democratic Senators have also reinforced this myth. For instance, Senator Mark Warner claimed that “there was a decision made that social media companies, and their connections, were going to be viewed as kind of just dumb pipes, not unlike a telco”).
- 120See Internet and Consumer Protection, C-Span (Oct. 16, 2019), https://www.c-span.org/video/?465331-1/google-reddit-officials-testify-internet-consumer-protection [https://perma.cc/8YME-TN4G].
- 121See Louie Gohmert, Gohmert Introduces Bill That Removes Liability Protections for Social Media Companies That Use Algorithms to Hide, Promote, or Filter User Content, U.S. Congressman Louie Gohmert (Dec. 20, 2018), https://gohmert.house.gov/news/documentsingle.aspx?DocumentID=398676 [https://perma.cc/GR8B-E3GP].
- 122@josh_hammer, Twitter (June 6, 2019, 1:12 PM), https://twitter.com/josh_hammer/status/1136697398481379331 [https://perma.cc/JN9C-8CFB].
- 123See Nolan Brown, supra note 84 (explaining that Senator Hawley claimed in a tweet that Section 230’s legal shield was predicated on platforms serving as “for[a] for a true diversity of political discourse”).
- 124See Gohmert, supra note 121.
- 125Exec. Order No. 13925, 85 Fed. Reg. 34,079 (2020).
- 126See Senator Jon Kyl, Covington Interim Report (accessed Mar. 20, 2020), https://fbnewsroomus.files.wordpress.com/2019/08/covington-interim-report-1.pdf [https://perma.cc/8VWD-7YK5].
- 127See id. (noting that the audit found Facebook’s advertising policies prohibiting shocking and sensational content resulted in the rejection of pro-life ads focused on survival stories of infants born before full term. Facebook adjusted its enforcement of this policy to focus on prohibiting ads only when the ad shows someone in visible pain or distress or where blood and bruising are visible).
- 128See Siva Vaidhyanathan, Why Conservatives Allege Big Tech is Muzzling Them, Atlantic (July 28, 2019), https://www.theatlantic.com/ideas/archive/2019/07/conservatives-pretend-big-tech-biased-against-them/594916/ [https://perma.cc/4N5L-QNKE].
- 129See, e.g., Mark Scott, Despite Cries of Censorship, Conservatives Dominate Social Media, Politico (Oct. 26, 2020), https://www.politico.com/news/2020/10/26/censorship-conservatives-social-media-432643 [https://perma.cc/US83-PEVB].
- 130See Citron & Richards, infra note 133, at 1361 (exploring how entities comprising our digital infrastructure, including search engines, browsers, hosts, transit providers, security providers, internet service providers, and content platforms, are privately owned, with certain exceptions like the Internet Corporation for Assigned Names and Numbers).
- 131See Manhattan Cmty. Access Corp., 139 S. Ct. 1921 (finding privately-owned cable television channel not a state actor).
- 132See Hearing on Stifling Free Speech: Technological Censorship and the Public Discourse Before the S. Comm. on the Judiciary, 116th Cong. (2019) (statement of Eugene Kontorovich, Prof., Geo. Mason Law Sch.) (available at https://www.judiciary.senate.gov/imo/media/doc/Kontorovich%20Testimony.pdf [https://perma.cc/BJ8S-8SHV]).
- 133See Danielle Keats Citron & Neil M. Richards, Four Principles for Digital Expression (You Won’t Believe #3!), 95 Wash. U. L. Rev. 1353, 1360 (2018).
- 134Id.
- 135See Hague v. C.I.O., 307 U.S. 496, 515 (1939).
- 136Cf. Commonwealth v. Davis, 39 N.E. 113, 113 (Mass. 1895), aff’d, Davis v. Massachusetts, 167 U.S. 43, 47 (1897).
- 137See Padhi, supra note 115.
- 138See Mary Anne Franks, The Free Speech Black Hole: Can the Internet Escape the Gravitational Pull of the First Amendment?, Knight First Amend. Inst. (Aug. 21, 2019), https://knightcolumbia.org/content/the-free-speech-black-hole-can-the-internet-escape-the-gravitational-pull-of-the-first-amendment [https://perma.cc/8MGE-M8G3].
- 139See id.; Citron & Richards, supra note 133, at 1371.
- 140See Manhattan Cmty. Access Corp., 139 S. Ct. at 1928.
- 141See Barnette, 319 U.S. at 641.
- 142Citron & Richards, supra note 133, at 1371.
- 143In connection with our work with CCRI, we have helped tech companies do precisely that. See generally Citron, Sexual Privacy, supra note 80; Franks, “Revenge Porn” Reform: A View from the Front Lines, supra note 61.
- 14447 U.S.C. § 230(c)(2).
- 145See Citron & Richards, supra note 133, at 1374 (explaining that, of course, not all companies involved in providing our online experiences are alike in their power and privilege. “As a company’s power over digital expression grows closer to total (meaning there are few to no alternatives to express oneself online), the greater the responsibilities (via regulation) attendant to that power.” Companies running the physical infrastructure of the internet, such as internet service and broadband providers, have power over digital expression tantamount to governmental power. In locations where people have only one broadband provider in their area, being banned from that provider would mean no broadband internet access at all. The (now-abandoned) net neutrality rules were animated by precisely those concerns); see also Genevieve Lakier, The Problem Isn’t Analogies but the Analogies that Courts Use, Knight First Amend. Inst. (Feb. 26, 2018), https://knightcolumbia.org/content/problem-isnt-use-analogies-analogies-courts-use [https://perma.cc/6H7Z-XPNN]; Frank Pasquale, The Black Box Society (2014) (arguing that the power of search engines may warrant far more regulation than currently exists. Although social media companies are powerful, they do not have the kind of control over our online experiences that broadband providers or even search engines do. Users banned from Facebook could recreate a social network elsewhere, though it would be time consuming and likely incomplete); Citron & Richards, supra note 133, at 1374 (highlighting that dissatisfaction with Facebook has inspired people’s migration to upstart social network services like MeWe by exploring different non-constitutional ways that law can protect digital expression).
- 146See Mary Anne Franks, The Utter Incoherence of Trump’s Battle with Twitter, The Atlantic (May 30, 2020), https://www.theatlantic.com/ideas/archive/2020/05/the-utter-incoherence-of-trumps-battle-with-twitter/612367/ [https://perma.cc/5UNZ-4WPR].
- 147One of us (Franks) is skeptical of the argument that there is any legal theory that entitles people, especially government officials, to demand access to or amplification by a private platform.
- 148At the symposium, Brian Leiter provided helpful comments on this point.
- 149See, e.g., Mary Anne Franks, Beyond ‘Free Speech for the White Man’: Feminism and the First Amendment, in Research Handbook on Feminist Jurisprudence (2019); Citron, Hate Crimes in Cyberspace, supra note 1; Citron, Cyber Civil Rights, supra note 1.
- 150See Citron, Cyber Civil Rights, supra note 1, at 66; Danielle Keats Citron & Mary Anne Franks, Cyber Civil Rights in the Time of COVID-19, Harv. L. Rev. Blog (May 14, 2020), https://blog.harvardlawreview.org/cyber-civil-rights-in-the-time-of-covid-19/ [https://perma.cc/766J-JYBR].
- 151See Citron, Hate Crimes in Cyberspace, supra note 1, at 57–72; Mary Anne Franks, Unwilling Avatars: Idealism and Discrimination in Cyberspace, 20 Colum. J. Gender & L. 224, 227 (2011); Citron, Cyber Civil Rights, supra note 1, at 66–67, 69–72.
- 152See Penney, Internet Surveillance, Regulation, and Chilling Effects Online: A Comparative Case Study, supra note 76.
- 153Franks, The Cult of the Constitution, supra note 21.
- 154Citron, Hate Crimes in Cyberspace, supra note 1, at 17.
- 155See Penney, Internet Surveillance, Regulation, and Chilling Effects Online: A Comparative Case Study, supra note 76.
- 156See Jonathon W. Penney & Danielle Keats Citron, When Law Frees Us to Speak, 87 Fordham L. Rev. 2318, 2319 (2018).
- 157Stop Enabling Sex Traffickers Act of 2017, S. 1693, 115th Cong. (2017).
- 158See id.
- 159See Citron & Jurecic, supra note 12.
- 160See Citron, Section 230’s Challenge to Civil Rights and Civil Liberties, supra note 31.
- 161E-mail from Geoffrey Stone, Professor of Law, Univ. of Chi. Law Sch., to author (Apr. 8, 2018) (on file with author).
- 162Citron, Hate Crimes in Cyberspace, supra note 1, at 177–78 (showing that one of us (Citron) supported this approach as an important interim step to broader reform).
- 163See Stacey Dogan, Principled Standards vs. Boundless Discretion: A Tale of Two Approaches to Intermediary Trademark Liability Online, 37 Colum. J.L. & Arts 503, 507–08 (2014).
- 164See id. at 508–09.
- 165H.R. 2896, 116th Cong. (1st Sess. 2019).
- 166See SHIELD Act of 2019, H.R. 2896, 116th Cong. § 2(a) (2019); see also Franks, “Revenge Porn” Reform: A View from the Front Lines, supra note 61 (explaining the exception).
- 167Tech companies have signaled their support as well. For instance, IBM issued a statement saying that Congress should adopt the proposal and wrote a tweet to that effect as well. See Ryan Hagemann, A Precision Approach to Stopping Illegal Online Activities, IBM ThinkPolicy Lab (July 10, 2019), https://www.ibm.com/blogs/policy/cda-230/ [https://perma.cc/YXN7-3B5V]; see also @RyanLeeHagemann, Twitter (July 10, 2019), https://twitter.com/RyanLeeHagemann/status/1149035886945939457?s=20 [https://perma.cc/QE2G-U4LY] (“A special shoutout to @daniellecitron and @benjaminwittes, who helped to clarify what a moderate, compromise-oriented approach to the #Section230 debate looks like.”).
- 168See User Clip: Danielle Citron Explains Content Moderation, C-Span (June 14, 2019), https://www.c-span.org/video/?c4802966/user-clip-danielle-citron-explains-content-moderation [https://perma.cc/B48G-4FYJ] (portraying Congressman Devin Nunes questioning Danielle Keats Citron at a House Intelligence Committee hearing about deepfakes in June 2019); see also Benjamin C. Zipursky, Reasonableness In and Out of Negligence Law, 163 U. Pa. L. Rev. 2131, 2135 (2015) (“For a term or a phrase to fall short of clarity because of vagueness is quite different from having no meaning at all, and both are different from having multiple meanings—being ambiguous.”).
- 169See Goldman, supra note 86, at 45.
- 170See Zipursky, supra note 168, at 2135 (noting that reasonableness is the hallmark of negligence claims by stating that “[t]he range of uses of ‘reasonableness’ in law is so great that a list is not an efficient way to describe and demarcate it”).
- 171This is not to suggest that all uses of the concept of reasonableness are sound or advisable. There is a considerable literature criticizing various features of reasonableness inquiries. In this piece, we endeavor to tackle the most salient critiques of reasonableness in the context of content moderation practices.
- 172John C.P. Goldberg & Benjamin C. Zipursky, Recognizing Wrongs 29 (2020). Goldberg and Zipursky contend that tort law is not about setting prices for certain activity or allocating costs to the cheapest cost avoider. Id. at 46–47.
- 173See id. (discussing Facebook’s hashing initiative to address nonconsensual distribution of intimate images).
- 174Nonconsensual pornography here would likely amount to tortious activity—the public disclosure of private fact. Also, nonconsensual pornography is now a crime in 46 states, D.C., and Guam. See 46 States + DC + One Territory Now Have Revenge Porn Laws, Cyber Civ. Rts. Initiative, https://www.cybercivilrights.org/revenge-porn-laws/ [https://perma.cc/KH69-YV7T].
- 175We take this example from an interview that one of us (Citron) recently conducted in connection with a book project on sexual privacy. A woman’s nude photo was used in a deepfake sex video, which was posted on a porn site. The porn site had a policy against nonconsensual pornography but did nothing when victims reported abuse. See Danielle Keats Citron, The End of Privacy: How Intimacy Became Data and How to Stop It (unpublished manuscript) (on file with author).
- 176See Masnick, supra note 119.