Something for Nothing: Untangling a Knot of Section 230 Solutions

Introduction

On May 26, 2020, then-President Trump tweeted out concerns about mail-in voting and the potential, as he saw it, that “[m]ail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed.”1 As part of its new policy on misleading content, Twitter applied a label to the tweet directing users to “[g]et the facts about mail-in ballots.”2 When users clicked the link, they were directed to a tweet from the account @TwitterSafety that advised users of the platform’s “civic integrity policy.”3 Twitter’s response to the President’s tweet stated that it “could confuse voters about what they need to do to receive a ballot and participate in the election process.”4

On May 28, 2020, President Trump issued an Executive Order titled “Preventing Online Censorship,” motivated by “troubling behaviors” by social media platforms “engaging in selective censorship that is harming our national discourse.”5 These troubling behaviors include “‘flagging’ content as inappropriate, even though it does not violate any stated terms of service” and “deleting content and entire accounts with no warning, no rationale, and no recourse.”6 Specifically, the Order accuses Twitter of “selectively decid[ing] to place a warning label on certain tweets in a manner that clearly reflects political bias.”7

The Order directed the National Telecommunications and Information Administration (NTIA) to file a rulemaking petition with the Federal Communications Commission (FCC) to seek clarification of a particularly contentious provision of the Communications Decency Act: Section 230.8 That Section provides “Good Samaritan” immunity for online hosts of third-party content that make a “good faith” effort to “restrict access” to indecent, illegal, or “otherwise objectionable” content.9

In his Order, the President alleged that the Good Samaritan provision is often “distorted to provide liability protection for online platforms that . . . engage in deceptive or pretextual actions . . . to stifle viewpoints with which they disagree.”10 The Order sought to prevent this abuse by clarifying the meaning of “good faith” and whether actions not in good faith will disqualify a platform from immunity.11

The President’s Order joined a chorus of political voices, mostly Republican, raising cries of politically biased content moderation on social media platforms.12 On the other side, Democrats have also complained that Section 230 does little to curb the spread of child exploitation content or political disinformation.13 Taken together, these arguments illustrate that Section 230 is critically flawed because social media platforms can abuse their immunity to moderate content in a biased or unfair manner, while not following through with the underlying purposes of Section 230: preserving free speech and making the Internet safer.14

The 116th Congress saw a flurry of activity to amend—or eliminate—Section 230. Eighteen bills were put forward between the House and Senate,15 though not one saw any action in committee.16 While the bills vary in their specific policy goals, they share similar approaches, including conditioning immunity on fair enforcement of terms of use and narrowing the category of “otherwise objectionable” content subject to protected Good Samaritan removal.17 Some proposals offer more robust statutory schemes, while others change as little as a few words of Section 230.18

Despite the vast menu of options for fixing Section 230, Congress has taken no action toward resolving the issue.19 This Note will build a comprehensive solution out of the variety of legislative proposals, with the goal of accomplishing the dual purposes of Section 230. Part I will summarize the history of Section 230 and illustrate through judicial decisions the legal dilemma that has arisen in applying the statutory immunity. Part II will survey notable legislative, executive, and academic solutions that have been put forward so far and identify those that will best promote free speech online while incentivizing platforms to combat illegal content. Part III will lay out some legal guidelines for structuring a proposal, while Part IV will weave all the strands together to create a proposal for a legislative solution that will more effectively promote Section 230’s purposes: to protect free speech and make the Internet safer.20

I. Background: About Section 230

A. The Communications Decency Act

The Communications Decency Act (CDA) was first passed in 1996 with the goal of protecting children from obscene and indecent content online by imposing criminal penalties on those who knowingly transmitted this content “over any telecommunications device, including the Internet.”21 Shortly before the bill was passed, it was amended to include immunity for online platforms that hosted third-party content.22 This provision was aimed at resolving a critical First Amendment issue that arose with the advent of the Internet: whether an online platform could be liable for the defamatory or illegal content of third parties.23

Before the advent of Internet publishing, First Amendment doctrine assigned liability for the publication of defamatory content based on a distinction between publishers, distributors, and platforms.24 Publishers, like newspapers, exercise a degree of editorial control over their content and are liable for publishing defamatory content created by a third party only upon a showing of actual malice.25 Distributors, like bookstores and newsstands, are not expected to have reviewed every item they sell but can be liable if they are given notice of defamatory content and refuse to remove it.26 A platform, such as a telephone service provider, is categorically immune from liability for third-party defamatory content.27

As Internet-based forums made it possible for nearly anyone to post their opinion online, the legal distinctions regarding defamation—especially between a publisher and a distributor—became more difficult to apply with consistency.28 A dilemma arose, illustrated by two cases from the early 1990s.29 In Cubby, Inc. v. Compuserve, Inc., the Southern District of New York held that an Internet content host could totally avoid liability for defamatory third-party content by declining to moderate any content on its website (thereby acting as a distributor).30 In Stratton Oakmont, Inc. v. Prodigy Services Co., a New York state trial court found that an Internet content host acted as a publisher because it attempted to control the content of its forums, thereby exposing itself to liability for third-party content.31 Within this framework, Internet content hosts were disincentivized from taking action against objectionable or illegal content, since any effort to moderate third-party content that did not remove all illegal content could expose a platform to the full extent of liability.32 By contrast, Section 230’s Good Samaritan immunity supersedes the common law publisher/distributor distinction and incentivizes active content moderation by shielding Internet content hosts who choose to moderate from liability resulting from illegal content that may slip through the cracks.33

Shortly after the CDA was passed, its indecency provisions were gutted by the Supreme Court in Reno v. ACLU.34 The Court found that the imposition of criminal sanctions for the transmission of obscene or indecent material was overly broad and violated the First Amendment since it would reach content that was constitutionally protected.35 After Congress amended the statute, all that remained of the CDA was Section 230’s Good Samaritan immunity.36

B. Section 230 in Action

Section 230 establishes two forms of immunity. The first declares that users or providers of online platforms will not “be treated as the publisher or speaker of any information provided by another information content provider.”37 This immunity is directed specifically at the First Amendment dilemma that arose from Cubby and Stratton Oakmont.38 The second immunity is intended to promote the statute’s Internet safety goals. It protects “any action voluntarily taken in good faith to restrict access to . . . material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”39 Together, the provisions create a statutory scheme in which an online platform is not liable to those it censors, nor is it liable for any failure to remove illegal or indecent content.40

Two opinions from the Seventh Circuit Court of Appeals applying Section 230 demonstrate both its wide reach and the potential dilemmas that arise from such a broad immunity. First, in Doe v. GTE Corp., the Seventh Circuit upheld Section 230 immunity for an Internet service provider that hosted a website selling secretly filmed nude videos of college athletes.41 Since the objectionable content was provided by a third party, subsection (c)(1) applied and shielded GTE from liability “under any state-law theory to the persons harmed by [the third party’s] material.”42

Although (c)(2) immunity was not implicated—since GTE made no effort to remove content—the court explored in dicta the notion that Section 230 immunity might actually disincentivize “Good Samaritan” content moderation.43 The court reasoned that, since platforms are protected even when they do not act, there is no incentive to undertake the costs of content moderation.44 The effect is that platforms do little to remove indecent content while still enjoying editorial immunity—an outcome that the court found to be inappropriate for a section of the CDA.45

The court explored various constructions of Section 230(c) that might bring the statute’s effect back in line with its purpose.46 One such reading involved treating subsection (c)(1) as a definitional clause, rather than a general immunity.47 This interpretation would treat “provider or user” as a status (in contrast to “speaker or publisher”) that would entitle the party to immunity under subsection (c)(2).48 However, under this interpretation, (c)(2) immunity would effectively swallow (c)(1) immunity and leave platforms exposed to state law liability.49 An alternate construction, the court suggested briefly, would limit the scope of (c)(1) immunity to “liability that depends on deeming the ISP a ‘publisher,’” like defamation.50

Ultimately, the court had no reason to conclude which interpretation of Section 230(c) was better because the plaintiffs’ claims under the Electronic Communications Privacy Act—that GTE had aided illegal activity by “intercepting” the objectionable content51—did not require a determination of whether GTE acted as a publisher.52 Plaintiffs also could not show that GTE had a duty, statutory or otherwise, that would implicate Section 230 protections.53 In effect, the court sidestepped the Section 230 dilemma it had identified by never reaching the question.

The Seventh Circuit adopted a similarly narrow reading of “publisher” in Chicago Lawyers’ Committee for Civil Rights Under Law v. Craigslist, Inc., in which the court upheld (c)(1) immunity for Craigslist when it was sued over discriminatory rental housing advertisements posted by third parties in violation of the Fair Housing Act.54 While the court acknowledged that the advertisements were actionable under the Fair Housing Act,55 Craigslist was nonetheless immunized by Section 230(c)(1) because the content was provided by third parties and Craigslist was acting only in the capacity of a publisher.56

The court was careful to note that Section 230(c)(1) is not a “general prohibition” of liability for platforms and seemed to adopt the narrow interpretation that (c)(1) immunity is limited only to causes of action that require a finding that the defendant acted as a publisher.57 It concluded that the Fair Housing Act imposed liability only if Craigslist had acted as publisher or speaker of the illegal advertisements.58 Given the protection of Section 230(c)(1), Craigslist could not be treated as the speaker or publisher of third-party content, so the court found no liability under the Fair Housing Act.59

C. Applying Section 230 to Discrimination Claims

Courts have used both subsections of Section 230(c) in tandem to find immunity for platforms being sued for removing content rather than hosting it. For example, in Domen v. Vimeo, Inc., a religious nonprofit sued Vimeo when the video hosting site removed content that violated its user agreement.60 The nonprofit posted videos advocating for Sexual Orientation Change Efforts, which were specifically prohibited by the terms of use, and Vimeo removed the content.61 The nonprofit sued, alleging, inter alia, religious discrimination and free speech violations under state and federal law.62

Applying subsection (c)(1), the district court used a three-part test for immunity that examined whether “the defendant (1) is a provider or user of an interactive computer service, (2) the claim is based on information provided by another information content provider, and (3) the claim would treat the defendant as the publisher or speaker of that information.”63 Although the plaintiffs insisted they did not seek to hold Vimeo liable for distributing their videos, the court concluded that Vimeo acted as a publisher because removing content falls in the realm of traditional editorial functions.64 The court also found that subsection (c)(2) applied directly to Vimeo policing content that violated its terms of use.65 Although the plaintiffs alleged that Vimeo had not acted in good faith, they failed to plead facts sufficient to support that claim, and the court declined to address the question.66

Many courts are hesitant to address the meaning of “good faith” when applying Section 230, fearing that an inquiry into the permissibility of a platform’s motive would lead to problematic outcomes.67 Excessive inquiry into a platform’s motive or procedure in removing content (or not) could expose platforms to a flood of claims that would contradict the very purpose of Section 230.68 Furthermore, the exposure of a platform’s content moderation decision-making process to judicial review would likely disincentivize taking any action at all, rendering Section 230 immunity meaningless.69 When faced with this choice, courts prefer to construe subsection (c)(2) broadly, in favor of immunity.70

II. Examining the Options

A. Legislative Proposals

Legislative proposals seeking to address problems with Section 230 were abundant during the 116th Congress.71 The following subsection reviews two of the most common approaches to fixing Section 230, as well as one notable proposal that goes beyond amending the text of the statute to create an entirely new framework. My analysis of these legislative solutions will inform the proposal I set forth below in Part IV.

1. Conditional (c)(1) Immunity

Making (c)(1) immunity conditional is one of the most common approaches to resolving the perverse incentive that platforms can benefit from Section 230’s immunity without having to take any action to further the statute’s goal of Internet safety.72 One Senate bill, the Stopping Big Tech’s Censorship Act (S. 4062), makes (c)(1) immunity conditional upon taking “reasonable steps” to prevent unlawful use of the platform.73 The bill defines “unlawful use” to include “cyberstalking, sex trafficking, trafficking in illegal products or activities, [and] child sexual exploitation.”74 Another, the Limiting Section 230 Immunity to Good Samaritans Act (S. 3983), conditions immunity on the platform’s maintenance of written terms of service.75

Whereas S. 3983’s conditional immunity is directed at ensuring platforms’ evenhanded enforcement of their terms of use, S. 4062 attacks the types of illegal content that Section 230 was initially intended to address.76 By adding a requirement of active content moderation to (c)(1) immunity, S. 4062 creates a connection between the two subsections where before there was none—at least not explicitly.77 Both bills deal generally with the perverse enforcement incentive by imposing a condition on (c)(1), but they diverge in terms of policy goals. S. 4062 seeks to incentivize platforms’ pursuit of Internet safety, while S. 3983 seeks to enforce a policy of transparent terms of use and evenhanded enforcement.78

Irrespective of its policy goals, this approach faces criticism on the grounds that it would be an unconstitutional condition imposed by the government on a protected right.79 At least one academic proposal suggests treating content moderation as a form of editorial discretion over a private forum, which is protected from government interference by the First Amendment.80 A condition on Section 230 immunity would require platforms to give up their protected right to moderate the content of a private forum or lose a benefit—immunity—that is necessary for their survival.81 One scholar argues against such conditions on the grounds that the government’s interest is not compelling enough to justify inducing platforms to give up their right to editorial discretion, concluding that “[s]uch proposals should not become law.”82

While it is certain that a platform’s right to make editorial decisions free from government interference is central to freedom of the press, the prevailing scholarly assumption is that the First Amendment does not require Section 230 immunity.83 This weakens the case for an unconstitutional condition because it lessens the importance and necessity of the benefit at issue—here, Section 230 immunity.84 If the content moderation decisions that a platform is making would have been protected under traditional editorial discretion doctrine, then immunity is not necessary to protect that right and a condition on immunity would not be unconstitutional.85

2. Narrowing the Scope of (c)(2)

In an effort to limit platforms’ wide editorial latitude, some lawmakers have sought to define “good faith” in subsection (c)(2) and to narrow the scope of “objectionable” content that a platform can remove or restrict with immunity.86 Currently, under subsection (c)(2), a platform is protected if it acts in “good faith” to remove or restrict content that the platform believes is obscene, illegal, or “otherwise objectionable.”87 This standard gives platforms nearly unlimited discretion to remove with impunity whatever content they find “objectionable,” as long as they do so in “good faith.”88

One proposed amendment, the Protect Speech Act (H.R. 8517), limits Good Samaritan protection to the removal of content that the platform has an “objectively reasonable belief” is obscene, violent, or illegal.89 This change is significant because it redirects the focus of the analysis from the platform’s motivation in restricting the content to the platform’s belief that the content was obscene and whether that belief was reasonable. The bill also redefines “good faith” to include publicly available terms of service “that state plainly and with particularity” the platform’s content moderation policies and practices.90 It also requires that platforms apply their terms of service evenhandedly and give users notice describing “the reasonable factual basis” for restricting or removing the content.91 In doing so, H.R. 8517 seeks to create more transparency and accountability regarding content moderation.92

By requiring consistent application of the platform’s terms of service or community guidelines, H.R. 8517 creates yet another First Amendment dilemma in an area that already has plenty.93 A consistency requirement would force platforms to remove all the content—and only the content—that violates terms of service or community standards, acting as a restraint on the content a platform may moderate.94 However, these sorts of editorial decisions are likely protected by First Amendment doctrine that prohibits the government from requiring newspapers to maintain politically neutral spaces.95

An additional pitfall of a consistency requirement is that it would require a government agency to make determinations of whether a platform acted appropriately in restricting content.96 That notion is problematic because the monitoring agency, likely the Federal Trade Commission, is probably not an expert in social media content moderation.97 Additionally, it would be tremendously costly for a government agency to review even a fraction of the content moderation decisions that a social media platform makes.98

3. A New Framework

A third, more ambitious proposal is the bipartisan Platform Accountability and Consumer Transparency Act (PACT), which largely leaves Section 230 in place and instead builds a new framework.99 First, PACT requires platforms to adopt and publish an “acceptable use policy” that “reasonably inform[s] users about the types of content” permitted and prohibited.100 The platform must also establish a complaint process by which users can report objectionable content or protest the platform’s decision to remove content.101 The bill requires platforms to submit quarterly reports on their content moderation practices and makes violation of these terms punishable under the Federal Trade Commission Act.102 Additionally, PACT creates an exception to Section 230 for platforms that have knowledge of illegal content or activity but do not make an effort to stop the illegal use within twenty-four hours of receiving notice.103 This requirement is not as far-reaching as it appears, however, since proper notice of illegal content requires a court order specifying that the content in question violates state or federal law.104

PACT avoids many of the pitfalls that other legislative proposals encounter, like unconstitutional conditions or limits on platforms’ editorial decision-making, by imposing regulations on process instead of content.105 This is also the bill’s weakness, as process requirements like quarterly transparency reports will require a substantial amount of agency oversight to administer.106 Scholars have also criticized the court-ordered takedown requirement as being susceptible to abuse by frivolous claimants.107 However, the approach of creating narrow exceptions in Section 230 immunity—such as for content the platform knew was illegal—is an effective step towards requiring platforms to moderate content more actively while avoiding First Amendment conflicts.108

B. Executive Proposals

As discussed in the Introduction, President Trump issued an Executive Order in May 2020 requesting administrative rulemaking to clarify the meaning of “good faith” in subsection (c)(2).109 The Order also suggested criteria for defining good faith, including whether the actions are “deceptive, pretextual, or inconsistent with a provider’s terms of service” and whether the platform provided clear notice to the user whose content was removed.110 Shortly after the Order was issued, the Center for Democracy and Technology (CDT) filed a lawsuit in federal court challenging the Order as retaliatory and in violation of the First Amendment.111 The court dismissed the suit on the grounds that CDT failed to demonstrate an imminent injury resulting from the Order.112 Furthermore, the court found that CDT’s First Amendment claim was unripe for adjudication because the Order did not prescribe law but, instead, directed federal agencies to take actions that might eventually lead to the law CDT claimed was unconstitutional.113

In September 2020, following President Trump’s Executive Order, the Department of Justice (DOJ) submitted a legislative proposal based on its own study of Section 230.114 Its conclusions, summarized in a letter sent to Congress, identify many of the same changes proposed by the bills discussed above, including defining “good faith”115 and narrowing the content that can be removed with immunity under subsection (c)(2).116 It also conditions subsection (c)(1) immunity on “act[ing] in good faith and abid[ing] by [a platform’s] own terms of service and public representations.”117

However, the DOJ’s recommendations expand on the congressional proposals in several key respects. First, the letter seeks to “clarif[y] the interplay” between subsection (c)(1) and subsection (c)(2) immunity and finds “that platforms cannot use [Section] 230(c)(1) as a shield against moderation decisions that fall outside the explicit limitations of [Section] 230(c)(2).”118 This change is directed specifically at cases like Vimeo, where courts use subsection (c)(1) to immunize a platform that removes content by construing “publisher” to include editorial decisions.119 Implicit in this recommendation is the notion that subsection (c)(1) is intended to apply when a platform does not remove content and subsection (c)(2) applies when a platform does.120

The DOJ deepened the divide between subsections (c)(1) and (c)(2) by also recommending an amendment to subsection (c)(1) to clarify that a content moderation decision, made in good faith and consistent with a platform’s terms of service, does not automatically make that platform a publisher.121 This amendment serves to reinforce the belief that subsection (c)(1) and subsection (c)(2) immunity should not overlap—that a platform should not be protected under subsection (c)(1) for restricting or removing content that would not otherwise be subject to subsection (c)(2) immunity.122 In effect, application of subsection (c)(1) immunity would be limited to liability that required a determination of whether the platform published—or merely distributed—the offending third-party content.123

Lastly, the DOJ proposal also includes three new exceptions to (c)(2) immunity for “platforms that (1) purposefully promote, facilitate, or solicit third-party content that would violate federal criminal law; (2) have actual knowledge that specific content it is hosting violates federal law; and (3) fail to remove unlawful content after receiving notice by way of a final court judgment.”124 These exceptions are intended to further Section 230’s original purpose of promoting Internet safety by creating “carve-outs” in (c)(2) immunity for platforms that fail to take appropriate action.125

III. Structuring a Proposal

Before proceeding, it is necessary to lay out some legal and practical guardrails in order to structure a proposal that achieves the purposes of the CDA—free speech and a safer Internet—while steering clear of constitutional pitfalls.126 This Part lays out five factors that are key to formulating the proposal that follows in Part IV. These factors are distilled from court decisions,127 legislative and executive proposals,128 and scholarship,129 and represent obstacles that have arisen in seeking to amend Section 230: the First Amendment, the mode of implementation, conditions on immunity, “overlapping” immunity, and defining “good faith.” These considerations will serve both as limits and as objectives for a proposal that seeks to promote the original purposes of Section 230 while navigating the litany of pitfalls that accompany government regulation of online content moderation.130

A. Preliminary First Amendment Considerations

Any attempt at regulating platforms’ content moderation practices risks First Amendment challenges on the grounds that the platforms’ acts of content moderation—often carried out by algorithms—are themselves a form of protected speech.131 Professor Kyle Langvardt suggests that social media platforms would frame content moderation as an editorial decision akin to the editorial discretion often exercised by a newspaper.132 Social media platforms likely “occupy the high ground” with this argument, Langvardt asserts, since the First Amendment as applied to newspapers has protected editorial decisions from government interference.133 This even prohibits requirements that newspapers maintain a politically balanced op-ed space.134 In short, the government likely could not require social media platforms to moderate content in a politically neutral manner.

This challenge is compounded by the premise that online platforms are private companies and not state actors or common carriers, so they are not bound by traditional First Amendment requirements, like viewpoint neutrality.135 A private entity that qualifies as a state actor is treated as an agent of the government and, consequently, is bound by the same First Amendment limits as the government itself.136 To be considered a state actor, an entity must either “perform[] a traditional, exclusive public function,” or act in conjunction with, or under the compulsion of, the state.137 Few, if any, social media platforms would meet these criteria.138 A related argument is that social media constitutes a public forum, the access to which is protected by the First Amendment.139 However, public forum doctrine is more relevant when government officials block users from their own social media profiles, since the doctrine applies only to situations in which the government limits access to the forum.140

Taking these constitutional considerations together, it becomes clear that any government regulation of online content moderation practices will operate within narrow boundaries. Langvardt reminds us that these constraints are not permanent, but subject to change based on the doctrinal inclinations of the Supreme Court.141 Nevertheless, the current landscape makes clear that online platforms should be regulated, but only with a subtle hand, to avoid running into constitutional challenges.

In February 2021, Florida Governor Ron DeSantis proposed more aggressive legislation that would impose a fine of $100,000 on social media platforms that ban the accounts of political candidates.142 Although the bill was approved by the Florida legislature, a federal court issued an injunction blocking the law from taking effect on the grounds that the First Amendment protects social media platforms against being forced to host political content.143 In Miami Herald Publishing Co. v. Tornillo, a newspaper challenged a regulation requiring it to give politicians a “right of reply” when criticized in editorials.144 The Supreme Court held that mandating the publication of a reply article interfered with the newspaper’s protected right to choose the content it publishes.145 With respect to the 2021 Florida law, the district court held that requiring social media platforms to host political candidates’ content is the twenty-first century equivalent of requiring a politically neutral op-ed page.146

B. Why the Only Solution Is a Government Solution

Given the narrow First Amendment limits, it might appear that legislation is not the best approach to resolving the Section 230 dilemma. Surveying three alternate legal approaches—common law, federal administrative regulation, and state-level regulation—Langvardt concludes not only that federal legislation is appropriate, but that Congress is the only branch of government with the authority to restrict content moderation by private entities.147 He first argues that a common law solution is untenable because it would require courts to loosen the state action doctrine when applied to private speech in order to hold platforms accountable to First Amendment requirements.148 Langvardt also dismisses administrative rulemaking that might reclassify social media platforms as common carriers—and therefore subject to content regulation—on the grounds that they cannot be analogized easily to services like the telephone or broadband Internet.149 Finally, any state-level attempts to “punish overzealous content moderation” would likely face federal preemption and challenges under the dormant commerce clause of the Constitution.150

Langvardt maintains that government intervention is especially desirable because there are few alternative platforms available for users trying to escape overly restrictive moderation.151 Although modern technology gives the appearance of a nearly infinite realm of possibility, the reality in social media is that the vast majority of speech happens on a small group of social networks, owned by an even smaller group of tech conglomerates.152 Out of approximately 3.5 billion social media users worldwide, Facebook commands more than 2.3 billion monthly active users (MAU).153 In second place is YouTube, owned by Google, with nearly 2 billion MAU.154 Instagram, which Facebook also owns, has about 1 billion MAU.155 In short, roughly two-thirds of all social media users in the world use a single platform, Facebook, which also owns the third-most popular platform.156 Twitter, not owned by Facebook or Google, pales in comparison with a measly 330 million MAU.157

To further illustrate the lack of meaningful competition in social media, consider the rash of social media bans against President Trump following his supporters’ riot at the U.S. Capitol on January 6, 2021.158 Twitter permanently banned Trump on January 8, claiming that Trump had violated the platform’s policy against threatening or glorifying violence.159 Facebook, Instagram, YouTube, Snapchat, and others also banned Trump for various lengths of time.160 Additionally, Google, Amazon, and Apple banned Twitter alternative Parler from their app stores on the grounds that the platform was used extensively to promote the Capitol riots and other violence.161 Parler is a particular favorite amongst Trump supporters and other right-wing social media users because its content moderation policies are held out as being more permissive than mainstream platforms’ policies.162

The case of Trump’s mass de-platforming demonstrates that widespread collective action against a common persona non grata can result in the exclusion of that person (or group) from the market for social media.163 Additionally, by banning Parler, Google, Amazon, and Apple also removed the primary alternative to “Big Social Media,” narrowing the market and raising barriers to entry for those alternatives that might try to compete.164 This lack of alternatives to Big Social Media highlights the risk that voluntary measures will be ineffective absent meaningful competition among social media platforms, underscoring the necessity of a legislative solution.

By way of comparison, Professor Edward Lee recently put forward a proposal that advocates for a voluntary, uniform code of nonpartisan content moderation.165 Finding that most platforms’ policies already embrace varying degrees of nonpartisanship, Lee suggests expanding the existing framework to include a standardized protocol for reviewing and appealing platforms’ editorial decisions.166 This system would also utilize trained moderators to review content, transparency reports that include data about the platform’s moderation activity, and independent audits to provide expert feedback.167

Langvardt and Lee agree that a major downside to an extensive government regulatory scheme is that it would require substantial—almost excessive—agency interference in platforms’ day-to-day operations.168 One example of such a proposal is Senator Josh Hawley’s Ending Support for Internet Censorship Act, which requires the Federal Trade Commission to establish a process by which it certifies that platforms do not engage in politically biased content moderation, thereby entitling them to Section 230 immunity.169 Not only would it require substantial resources for the Federal Trade Commission to review the content moderation activity of every online platform seeking immunity,170 but it would also create an “immensely important quasi-constitutional institution” with broad-reaching, undefined powers.171

A strength of Lee’s proposal is that it avoids the issue of government overreach or intrusion that might result from too aggressive a regulatory scheme.172 The flipside of such a voluntary code is that it relies on the goodwill of quasi-monopolies to undertake the costs of adopting and implementing the code, including the three-level, double-blind review process Lee proposes.173

C. “Something for Nothing”

Platforms’ ability to immunize themselves entirely from liability without having to make any effort to advance the purposes of Section 230 has created a “something for nothing” dilemma, in which platforms enjoy the benefits of immunity without having to screen for illegal content.174 In a way, the dilemma encompasses both parties’ gripes with Section 230. On one hand, platforms may moderate content however they see fit, leading to claims of viewpoint discrimination and anti-conservative bias on the right.175 On the other, platforms are protected by Section 230 regardless of whether they undertake to remove indecent or illegal content—one of the left’s major complaints.176

The most common solution put forward to deal with the “something for nothing” dilemma is to impose a condition on (c)(1) immunity: either to moderate content in a nonpartisan manner or to engage in proactive moderation of indecent and illegal content.177 Although such conditions are admittedly a direct route to holding platforms accountable for Section 230’s various policy objectives, they are also vulnerable to constitutional challenges on the grounds that they impermissibly restrict platforms’ free exercise of their First Amendment right to monitor content on a private forum.178

A solution that avoids the constitutional issue while still addressing the dilemma might be to create carve-outs to immunity based on certain behaviors that both sides agree are harmful.179 Some examples of such carve-outs are the Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA), a pair of anti-sex trafficking bills passed in tandem in 2018 that remove Section 230 immunity if third parties use a platform to advertise or solicit sex work.180 Detractors of FOSTA-SESTA argue that it imposes a costly burden of moderation on platforms and punishes them based on the unpredictable behavior of their users.181 Other classes of carve-outs might include exceptions to immunity for platforms that host content created by certain specified terrorist organizations or platforms that do not limit foreign propaganda.182

A final consideration when implementing the carve-out solution to the “something for nothing” dilemma is that it is better directed at the goal of preventing illegal uses of platforms than of preventing politically-biased moderation.183 Government regulation of speech based on its content (as compared to regulations regarding its “time, place, and manner”) must survive a stricter degree of judicial review that looks to whether the regulation is “narrowly tailored to serve a compelling government interest.”184 In fact, Section 230 itself is the sole remnant of an encounter with strict scrutiny: the rest of the CDA was ruled an overly vague content-based regulation.185 Since it is well-settled that the government cannot require the publication of politically neutral content, any carve-out that would limit a platform’s editorial discretion—particularly with respect to political content—already has the weight of authority against it and would not likely pass constitutional muster.186 By contrast, Section 230 already contains several carve-outs for the enforcement of state and federal laws, including intellectual property, communications privacy, and sex trafficking laws.187

D. Separating the Overlapping Immunities

As illustrated by some of the leading opinions dealing with Section 230 immunity, there is an apparent overlap in the way that courts apply subsections (c)(1) and (c)(2), especially to claims of discriminatory content moderation.188 Although (c)(1) is commonly recognized as the subsection that protects platforms against speaker or publisher liability for third-party content, courts frequently read “publisher” in (c)(1) to include any actions the platform took that would constitute editorial decisions.189 Because courts treat content moderation as a form of editorial decision, platforms could theoretically be immunized under (c)(1) even if their actions were not in “good faith” or not directed at a type of objectionable content specified in (c)(2).190

Overlapping immunity strays from one of the original intentions of Section 230, which was to resolve the “moderator’s dilemma” that arose from the Cubby and Stratton Oakmont decisions.191 A prevailing judicial interpretation of Section 230 is that subsection (c)(2) protects the platform when it chooses to remove content—addressing Stratton Oakmont—and (c)(1) protects the platform when it refrains from moderating, as in Cubby.192 Allowing (c)(1) immunity for acts of content moderation destroys the difference between the two subsections and grants immunity with an overly broad stroke.193 The DOJ recommendations deal with this overlap most directly, decoupling the two subsections with a term specifying that the removal of content pursuant to subsection (c)(2) does not necessarily make the platform a publisher of all other third-party content.194 This term would consolidate immunity for content removal under (c)(2) by excluding moderation decisions from the scope of (c)(1) immunity.195

To effectively separate the two immunities, however, a proposal must also narrow the scope of “otherwise objectionable” content in (c)(2).196 Courts have read this phrase broadly, granting essentially unlimited discretion to platforms to remove content as they see fit.197 This change can be accomplished simply, by removing “otherwise objectionable” and replacing it with more specific qualifications of the content a platform can remove with immunity.198 The question of how tightly to narrow the scope of (c)(2) is a policy decision, but the essence of the revision is that the determination of “objectionable” should not be left to the platform’s discretion.199

E. Defining “Good Faith”

Defining “good faith” has been a perennial issue both for courts applying Section 230 and for lawmakers trying to amend the statute.200 A primary concern is that the good faith standard is often ignored, and that platforms censor those who express politically unpopular viewpoints.201 Many proposals seek to define “good faith” in terms of a platform’s evenhanded, transparent, and politically neutral enforcement of its terms of service.202 As a favorable example, the DOJ recommendations provide a detailed, four-point framework for assessing whether a platform has acted in good faith.203 A strength of this framework is that it avoids constitutional issues by regulating only the manner of enforcement, not what kind of content the platform may remove.204

One area of concern, however, is the recommendation that a good faith content removal requires an objectively reasonable belief that the content falls within one of the categories specified by (c)(2)(A).205 This recommendation introduces a reasonable person standard into the good faith analysis in order to promote greater neutrality and transparency in moderation decisions.206 However, the language proposed by the DOJ seems to focus more on the platform’s belief—and whether it was objectively reasonable—than on whether the content removed could reasonably be understood to violate the terms of use. Furthermore, courts fear that excessive inquiry into a platform’s motive or belief while removing content may lead to problematic or unconstitutional outcomes.207 Perhaps a simple solution to this concern is to remove this requirement entirely, since it appears duplicative of several of the DOJ’s recommended good faith factors.208

As a supplement to its definition of good faith, the DOJ also recommends creating a carve-out in (c)(2) immunity for “Bad Samaritans” who promote or facilitate illegal content while still enjoying Section 230 immunity.209 Given the constitutional concerns addressed above, the creation of carve-outs in the blanket immunity is a favorable approach because it allows for narrow exceptions that do not tread on platforms’ editorial discretion.210

IV. Proposal: PACT Plus

Of all the government solutions set forth in Part II, the bill that fits best within the guardrails laid out in Part III is PACT. Its primary strength is that it avoids First Amendment concerns about infringing on platforms’ editorial decisions by regulating process, rather than content.211 Additionally, it uses narrow carve-outs to remove Section 230 immunity from platforms that do not police illegal content—an effective means of sidestepping concerns about unconstitutional conditions.212 As the need for new exceptions to immunity arises, legislators can add narrow carve-outs to address the need while being careful to avoid constitutional issues.213 That said, PACT has its weaknesses, including a need for excessive agency oversight and the potential for abuse by frivolous claimants.214 There are also serious questions about whether the transparency reports required by the bill will be useful in monitoring platforms’ content moderation practices.215

While agency oversight presents practical and constitutional pitfalls, it is a necessary evil that can be limited through specific constraints on the agency’s power to review platforms’ content moderation decisions.216 Ideally, the agency’s review would not ask whether the content at issue indeed violated the platform’s community standards, but whether the platform acted in accordance with its published terms of use. Although this opens the door for platforms to adopt and publish a rule of essentially unlimited power to remove content, users could discover this through transparency reports (or by simply reading the platform’s terms of use) and choose to use an alternative platform.217

PACT maintains Section 230 immunity in parallel with its new framework, so it is necessary to include changes to that immunity in order to achieve fully the legislation’s goals.218 A “PACT Plus” proposal should—and does—include carve-outs to limit (c)(1) immunity.219 These carve-outs are essential to ensure that restrictions on content moderation are not so broad as to be unconstitutional.220 An effective proposal must also separate the “overlap” of subsections (c)(1) and (c)(2) so that content moderation decisions are not immunized under both.221 This can be accomplished first by specifying that content removals pursuant to (c)(2) do not, on their own, render a platform a publisher under (c)(1).222 Then, a proposal should narrow the scope of (c)(2) protection by removing the “otherwise objectionable” language that gives platforms nearly unlimited discretion to remove content.223

Finally, a proposal must define “good faith” with a set of clear criteria to aid courts in assessing the nature of platforms’ actions in removing content.224 The DOJ recommendations set forth a particularly useful four-part test for good faith that seems to avoid constitutional issues by looking primarily at the platform’s application of its terms of use.225 By tying the definition of good faith to the way a platform enforces and adheres to its terms of use, the DOJ’s test helps reinforce the PACT transparency framework.226 In effect, platforms’ immunity for removing content will be tied to their adherence to their own terms of use.

Conclusion

In the leadup to his inauguration in January 2021, President Joe Biden resumed calls for lawmakers to repeal Section 230 entirely, citing overreach and propagation of false information by Facebook and other social media platforms.227 Although President Biden’s approach is contrary to those put forward by lawmakers and Trump’s administration, it shares a common motivation: to restrict platforms’ unlimited immunity to remove content on a whim.228 The 116th Congress produced a wealth of options for reforming Section 230 that fall short of repealing the statute entirely, and the Biden Administration would do well to take them under consideration.229 One standout option, PACT, leaves Section 230 immunity in place, albeit with an exception for failure to comply with a court order, and builds a new framework dedicated to content moderation transparency.230 PACT nimbly avoids constitutional issues by requiring due process in content moderation instead of regulating what platforms can and cannot moderate.231 However, the bill should be amended to include certain key changes to the text of Section 230 that will make a considerable impact in reining in the nearly unlimited immunity to moderate content that platforms currently enjoy.232 Between these two strategies—process requirements and more limited immunity—the Congress can restrict platforms’ wide reach while still accomplishing the original goals of Section 230: preserving free speech and creating a safer Internet.233

With the threat of increased regulation looming on the horizon, stakeholders are starting to feel the pressure and act on their own. Facebook’s Oversight Board reviewed Facebook’s January 7, 2021 decision to restrict then-President Trump’s access to his account indefinitely.234 The Board found that a temporary suspension was appropriate but that an indefinite ban was excessively harsh, and remanded to Facebook to render a proper penalty.235 Founded in 2019, the Board is composed of twenty academics, former politicians, and activists who have the power to rule against Facebook’s moderation actions.236 The Board will soon add twenty more members, and is being touted as a vehicle for reforming the social media platform.237

Taking a different tack, allies of former President Trump launched an “alternative” social media platform in July 2021.238 The new app is called “GETTR”—a contraction of “getting together”—and advertises itself as “a non-bias social network for people all over the world.”239 Its mission is to fight “cancel culture” and promote free speech.240 Ultimately, the platform has not enjoyed as much success as its founders anticipated: shortly after its launch, the app was attacked by hackers and more than 85,000 email addresses were stolen.241 Time will tell if others try to start their own free speech-focused platforms.


* Podcast Editor, Cardozo Law Review; J.D. Candidate (May 2022), Benjamin N. Cardozo School of Law; B.A., Emory University, 2015. I am tremendously grateful to my faculty advisor, Professor Deborah Pearlstein, and to Cardozo Law Review editors past and present for their wisdom and guidance as I navigated the process of creating a work of legal scholarship. Thank you also to my parents, Andrea and Patrick Bradley, for raising me to think like a lawyer. Semper certa, interdum vera.