Disinformation on Trial: Fighting Foreign Disinformation by Empowering the Victims

Foreign disinformation catapulted into the national spotlight with the 2016 presidential election, but its impact is not confined to the electoral map or season. This Article addresses the threat of foreign disinformation by proposing a new statute: a private right of action enabling harmed persons to directly sue state or private actors, foreign or domestic, who knowingly or recklessly spread disinformation from abroad. Scholars and policymakers have proposed other, wide-ranging solutions, from greater online security to outright censorship. Each of those ideas stumbles on common challenges and lacks a valuable ingredient: an interested party, directly harmed by the foreign campaign, who benefits from a solution and thus has a motivation to act. This proposal adds to the arsenal and grants benefits found nowhere else: public notice of foreign interference; a tool to restrain domestic accomplices who spread disinformation; and a moral, if not always financial, payoff for victims.

Introduction

You open a website. A bright red banner flashes: “Breaking News.” This page usually covers political controversies, misdeeds by the politicians destroying America. But this story is headlined, “13-Year-Old Girl Brutalized; Illegal Immigrants Abduct and Gang Rape Local Honors Student.” Last night, the girl’s parents reported her missing. Now, an online video shows an interview with a woman claiming to be the girl’s aunt, crying about “immigrant gangsters.” The video goes viral. Facebook users share it over a million times. The original site notes that the police are not investigating. Other websites, some seemingly fly-by-night, call for protests. Trustworthy community advocates begin retweeting those calls. Soon, at the behest of faceless online organizers, thousands gather at police headquarters calling for the chief to resign.

But none of it, other than the girl—who in fact had run away to visit a boy—and the protests, is real. Not the abduction, not the aunt, and certainly not the attack by immigrants. This episode, these facts, played out beat-by-beat in Germany in 2016,1 inflaming already rife anti-immigrant tensions and weakening the vulnerable governing party.2 The so-called “Lisa case” was an early Russian disinformation effort that all began with a fake report on a Russian-owned news channel.3

In America, foreign disinformation catapulted into the spotlight with the 2016 presidential election, but its impact is not confined to the electoral map or season.4 As with Germany’s “Lisa case,” the real-world harm can extend beyond sore feelings,5 and the costs can escalate quickly.6 The similar though distinct threat of domestic disinformation dominated the 2020 election cycle,7 but foreign disinformation remains an ongoing danger.8 Moreover, it poses a bold new front in today’s Great Power conflict because, unlike traditional interstate espionage in which harm to nongovernmental interests or officials is rare, this threat sees the public—media, private individuals, and civic norms—as its primary target.

Since the 2016 election, scholars and public officials have proposed many solutions, ranging from greater online security9 and news trustworthiness ratings,10 to outright censorship.11 The response so far—by government, the technology sector, and civil society—has floundered.12 While government investigations have uncovered the breadth of previous attacks, institutions have not secured their IT systems against foreign intrusion,13 tech companies have zigzagged in a much-derided series of changes,14 and some entities have taken drastic and potentially unconstitutional steps.15 These efforts expose the enormous difficulties in responding—from First Amendment protections to entrenched, opposing interests—and leave the playing field open for better solutions.

This Article enters that breach and advances a supplementary approach, one that avoids many of the fault lines and, more significantly, adds a valuable ingredient that the others lack: an interested party directly harmed by the foreign interference who materially benefits from a solution and has a motivation to act. This Article proposes a statutory private right of action enabling harmed persons to directly sue state or private, and foreign or domestic, actors who knowingly or recklessly spread foreign disinformation. Fueling real-world protests with false accounts of a little girl’s rape, as in the Lisa case, violated her privacy, turned her attempt to hide a secret pre-teen romance into a national story, and left her reputation in tatters.16 As that event showed, the impact from these attacks can impose clear and lasting material costs, on top of the diffuse, harder-to-define harms to civic trust. This Article argues for a solution that takes on this broad danger by empowering individuals who suffer directly.

A private right of action will not end disinformation. Civil litigation’s demands and inefficiencies will limit its use. Challenges include proving harm and collecting awards. But this approach grants benefits found nowhere else: public notice of foreign interference, especially important when government leaders fail to act or actively deny it; a tool to restrain domestic accomplices who spread disinformation; and a moral, if not financial, resolution for victims. The private right of action adds crucial elements to the broader response.

This Article proceeds in four Parts. Part I defines the problem, explaining what is meant by disinformation and distinguishing it from other forms of controversial rhetoric. Part I also illustrates disinformation’s historical use, shows how it has evolved, and establishes why we must tackle the threat. Part II then describes how policymakers are addressing it. It highlights the most intriguing proposals and illustrates the challenges they face. With that foundation laid, Part III presents this Article’s solution. It carefully describes each element of the new right of action and lays out the statute’s goals and limits. Finally, Part IV fortifies support for the proposal by showing how its elements are constructed to overcome the obstacles. Here, the Article draws on an analogous suite of laws in a related context: antiterrorism. By building on those laws’ evolution and judicial application, the proposed disinformation statute can avoid many of the traps the antiterrorism laws encountered.

Disinformation is a complex knot, coopting and harming core democratic interests. Addressing its challenges will be hard enough that it is tempting to instead resign oneself to the costs. This Article argues that no all-encompassing response will work, and instead, leaders must fight on multiple fronts. The proposed private right of action can be a key tool in that campaign.

I. The Threat

Parts I and II of this Article lay out the disinformation playing field, allowing the remaining Parts to argue the statute’s purpose, application, and effectiveness. Section I.A defines what we mean by disinformation and distinguishes it from other types of controversial speech. Section I.B then shows that disinformation is an age-old problem, but today, several factors turn what had been a chronic concern into a clear and present danger.

A. What We Mean by “Disinformation”

Before one confronts the challenge of distinguishing disinformation from real news comes the important step of defining the term. It is popular in the current political era to bandy about the phrase “fake news.”17 Even seasoned media analysts and academics use it.18 But it is far more helpful to divide harmful information into three types: misinformation, mal-information, and disinformation.19

Misinformation, in this typology, “is information that is false, but the person who is disseminating it believes that it is true.”20 Without yet weighing the issue of mens rea—whether an actor knows her words are untrue—an example of misinformation is a reporter whose claims in response to breaking news later turn out to be wrong. Widespread misinformation followed the 1995 bombing of the Alfred P. Murrah Federal Building in Oklahoma City, when numerous otherwise reliable news sources initially blamed Middle Eastern terrorists; in fact, the plot was domestic.21 Key to the definition is that the speakers’ errors were not intentionally malicious.

A second category is mal-information: a statement that is primarily true but that the issuer spreads in order to cause harm.22 This can also be a more localized form of harmful information, one meant to be heard by only a few people. An example is a speaker revealing damaging facts about a competitor in order to tarnish the target’s reputation or material well-being, such as leaks about criminal conduct or the target’s closeted lifestyle.23 Mal-information, described simply, is the release of factual information to cause harm.

The third category is disinformation. These are statements that are either partially or entirely false and known to be false by the person who is disseminating them; thus, she is maliciously lying in order to cause harm.24 Here, the harm and falsity factors combine. This is the category that encompasses foreign influence campaigns.

One must recognize that a given piece of information can move between categories depending on the user’s intent, and the same statement might require a different response as it appears in different forms. An example of a statement that could be defined all three ways, depending on who says it and when, arose from the story of NFL player Colin Kaepernick kneeling during the national anthem.25 Seizing on that story, a provocative Russian tweet on March 13, 2018, from an account named @wokeluisa spoke in support of Kaepernick, while another Russian account, the right-wing @BarbaraForTrump, simultaneously tweeted content hostile to NFL players’ protests, each aimed at its respective ideological camp.26 The pro-Kaepernick tweet prompted 37,000 presumably unwitting retweets.27 The story began in traditional media as mal-information: actual facts in popular discussion that other parties used to tarnish the player’s reputation. The Russian tweeters then took those facts and made them into disinformation, twisting the truth with the goal of spreading civic discord. Then, at least some of the people who retweeted those messages, oblivious to the source, spread what would properly be called misinformation (information the actor does not know is false).

This Article focuses on disinformation, and in fact, a narrower subset thereof: disinformation arising from a foreign source. To state it bluntly, foreign disinformation is speech used as a strategic weapon. The foreign source is uniquely worrying because that other nation’s pursuit of a strategic end, while hiding its role, is effectively active espionage. Even if international law does not specifically forbid disinformation, such efforts, like undeclared acts of war,28 transgress international norms and should evoke moral abhorrence.29 Narrowing the focus to foreign disinformation is not intended to discount domestic actors.30 Indeed, this Article’s proposed statute would impose liability on domestic actors for spreading foreign disinformation. A domestic actor who furthers the foreign state’s aims by recklessly amplifying its message is facilitating the attack and shares culpability.31 Weaponized speech to achieve strategic goals turns America’s core liberties against it.

With these distinctions, a full definition of foreign disinformation requires four prongs:

(1) the content’s foreign source;

(2) the content’s factual mistruth;

(3) the originator’s malicious intent and strategic aim to undermine civic well-being; and

(4) resulting harm from the content’s release into the public sphere.32

A final consideration in accurately identifying disinformation is recognizing that under this definition, its manifestations can look very different depending on two variables: the actor (the entity proximately responsible for spreading the content) and the target (the ultimate audience). Each combination of actor and target prompts different considerations in combatting disinformation. In justifying the proposed statute and explaining why other solutions stumble, this Article will refer to different pairings between these two categories:

Actors: (1) state agents; (2) state subsidiaries; (3) knowing domestic amplifiers; and (4) unwitting accomplices.

Targets: (1) government leaders; (2) media companies and online influencers; (3) corporations; and (4) the public.33

With the boundaries drawn, the next Section will discuss actual occurrences of, and harm from, foreign disinformation and show why today’s threat is worse.

B. The Harm: Its Evolution and Material Effect

Consider how states have used disinformation before. It has deep historical roots,34 and its face is ever-changing. Its impact is often nominal or abstract, but sometimes its consequences can be striking. Looking at disinformation in practice not only shows why a response is more urgent now but also how tangibly words can hurt.

States have used disinformation as a weapon since the beginning of armed conflict.35 The definition of disinformation used here encompasses foreign espionage campaigns stretching back to ancient Rome—often used in combination with other spycraft to mislead enemies as to Roman troop movements.36 More advanced techniques appeared in the Great Powers era. In early 1918, during WWI, the Allied nations were flagging against their entrenched German enemy.37 Russia, opposed to the Germans, descended into civil war, pitting the Allied-aligned provisional government against the unaligned Bolsheviks. The western Allies had not yet taken a side in the Russian conflict. To help it survive the civil war, the provisional government hatched a plot. In cahoots with an American journalist, Edgar Sisson, it leaked a story, which U.S. newspapers eagerly repeated, that the Bolsheviks were secretly negotiating an alliance with Germany, causing the American public to turn against the Bolsheviks overnight.38 Years later, researchers revealed that all of the documents on which Sisson relied were false.39 From those earliest days, Russian operatives successfully manipulated the American public.40 Russia continued to use disinformation as a strategic weapon throughout the Cold War.41

America is no less complicit, and in addressing today’s problem, it is essential to recognize America’s own disinformation history and motives. The WWI episode of Russian disinformation described above in fact began when Sisson, the reporter, traveled to Russia at the behest of America’s de facto propaganda ministry, the Committee on Public Information.42 While the agency’s mandate was mainly to drum up domestic support for involvement in the war, its Foreign Section was responsible for spreading both overt and unofficial pro-American messages around the world.43 American disinformation took on a far more aggressive and duplicitous form as well. From 1963 to 1973, the CIA led a disinformation campaign in Chile that ultimately swung popular sentiment against the left-leaning government and led to Augusto Pinochet’s rise to power.44 In Nicaragua in the 1980s, CIA operatives planted enough false stories against the Sandinista regime that papers ran seventy to eighty of them a day, ultimately undermining the Sandinistas as well.45 One count of American-led disinformation efforts in elections abroad cited eighty-one occurrences between 1946 and 2000.46 America has long used disinformation to real-world effect, and even though its methods and goals have evolved into a less malicious form,47 American leaders have a vested interest in keeping disinformation in the arsenal.48 Thus, in combatting disinformation, policymakers must consider the blowback against American interests.

Today, the threat from foreign disinformation has transformed, grown rife, and is harming Americans more than ever. The stability of U.S. elections is not the only concern, but it is an overriding one. Disinformation burst into public attention in 2016, when authorities began uncovering a far-reaching plot by Russia’s Internet Research Agency (IRA) to divide society and potentially alter the election.49 Regardless of whether Donald Trump’s narrow Electoral College win was attributable to Russian efforts,50 the breadth of those efforts was clear. The agency developed a network of imposter media accounts publishing inflammatory messages to the American electorate on topics including race, immigration, and public safety.51 A later Senate Intelligence Committee investigation revealed that the 2016 effort was disproportionately focused on stoking dissent and repressing voter turnout among African Americans.52 Foreign hackers have hijacked established publications across the media spectrum.53 News analyses revealed that readers began to repeat and amplify those messages.54 The impact extended beyond online invective and riled emotions; it spilled physically out onto the streets.

In one well-reported incident, the IRA used two fake online groups—one anti- and the other pro-Islamist—to promote a nonexistent rally in Houston.55 Unwitting supporters and opponents spread the news, and dozens of riled protestors appeared at the designated time, effectively on Russia’s marching orders.56 On another occasion, Trump supporters in Florida featured two Russian-troll-backed events on their election websites.57 Facebook identified at least 130 events promoted on its platform tied to IRA activity.58 Robert Mueller’s investigators uncovered one incredible example in which IRA operatives, working entirely online, hired individuals in the United States to build a cage on a flatbed truck and a costumed actress to portray Hillary Clinton in a prison outfit while the truck drove through a Florida rally.59

The threat is not limited to elections, nor does it exist only on digital platforms. The IRA’s effort to stoke dissent continued after the election and by many accounts grew.60 The Washington Post reported in September 2019 that a North Macedonian troll farm took over multiple accounts and spread right-wing messages unassociated with any election—mostly with the account owners’ awareness and an absence of intervention by the social media platforms.61 Disinformation can affect national policy and transnational business interests.62 Chinese nationals have actively penetrated U.S. communities to rewrite the narrative on their nation’s conduct.63 Online publication The Intercept revealed a series of paid advertisements that Twitter allowed on its platform from June through August 2019 that mischaracterized and positively reframed China’s treatment of its Uighur population in Xinjiang.64 The ad campaign, it turned out, was from the Global Times, a Chinese state-owned media organization.65 More overtly—albeit with only faint reference to the fact that it was a paid advertisement—the Des Moines Register ran a four-page spread of articles in September 2018 that touted free trade’s benefits for U.S. farmers, the risks of China-U.S. trade tensions, and President Xi’s long ties to Iowa.66 Other adversarial states—Iran and North Korea—have launched similar disinformation attacks against regional enemies and the United States.67 And demonstrating their tactic of piggybacking on breaking news, in just the first three days of the George Floyd police-killing protests in late May 2020, Chinese ambassadors, Russian-sponsored news outlets, and other foreign-controlled accounts tweeted provocative antigovernment messages more than 1,200 times.68 When the Covid-19 pandemic erupted, Russian operatives launched a disinformation barrage, aiming to undercut the World Health Organization and spread messages about American and European mismanagement, all while praising the performance of China and Iran.69 In a dynamic turn, China then symbiotically amplified Russian efforts; Chinese diplomats aggressively reposted Russian disinformation on their Twitter accounts, and Chinese news agencies spread it further.70

Recently, weaponized disinformation has moved beyond the sole control of state actors or cybercriminals. Unlike the fly-by-night North Macedonian troll farm that peddled general right-wing propaganda during the 2020 election cycle, a global wave of public relations firms has begun to offer professionalized disinformation for hire.71 These firms take disinformation techniques that once only intelligence agencies could deploy and offer them to a variety of governmental and private clients, coating those services with a gloss of professionalism—even sinister respectability.72 The menu of services caters to nearly any purpose a client might desire.73 Journalists have found PR firms offering to engage in high-tech corporate espionage and market influence, to promote a given company, and to tar the reputations of competitors.74 Clients have also unleashed these firms’ services against political opponents, both abroad75 and in the United States.76

National governments have also gotten in on the outsourcing act. Just as Russia has given an effective free pass to criminal syndicates to engage in cybercrime as a means of outsourcing destabilizing campaigns,77 governments, including the United States, have enabled disinformation-for-hire groups both indirectly and directly.78 In one such case, the Pentagon hired a British PR firm to conduct both mainstream political media campaigns and secretive psyops in Iraq for a reported $540 million.79 Such outsourcing allows governments to secure the benefits of disinformation warfare—riling an enemy’s internal polity or weakening other nations’ political leaders—at lower cost and with greater deniability.80

Disinformation can impose immediate and sometimes life-and-death costs. In Syria, a combined Syrian-Russian disinformation campaign sought to undermine public support for a local humanitarian group, the White Helmets, because its efforts had uncovered evidence of Syrian and Russian human rights abuses.81 The campaign portrayed the group “as terrorists,” as using chemical weapons on civilians, and as threatening civilians as “legitimate targets.”82 The group’s public backing waned and, as The Washington Post described, White Helmet personnel were arrested and tortured. “This isn’t just buzz on the Internet,” one stated. “We’re dying for this.”83

While the enormous durability and reach of these campaigns is alarming, those same factors plausibly imply that the status quo is tolerable. That is, if states have effectively matched each other in effort and capability going back to the Roman era, then existing defenses have capably addressed the threat and the costs have already been accepted. But there are differences today, which will grow with time, that should strongly counsel against inaction.

In one recent consideration of the problem, scholar Nabiha Syed identified five features of present-day information exchange—each aggravated by pervasive internet connectedness—that leave America uniquely vulnerable to disinformation’s effects.84 First, manual and algorithmic filters curate the information to which a person exposes herself.85 Americans are increasingly likely to hear only a single, slanted view of the world.86 Second, localized communities create their own online content, further limiting the inflow of outside facts.87 As Syed describes it, “[E]ven though a television anchor might present you with a visual of Obama’s American birth certificate, your online community—composed of members you trust—can present to you alternative and potentially more persuasive perspectives on that certificate.”88 Third, amplification: truly anyone can spread harmful messages to millions with the push of a button.89 Fourth, and adding to that ease: exponential acceleration using AI and social media bots.90 One more example of speed-enhancing technology, which Syed does not mention, is the threat of deepfakes. Even though Russian operatives were half-convincingly manipulating photos with scissors and tape back in the 1930s,91 contemporary digital technology can make fake content appear real to all but the best-equipped experts.92 And fifth, profit incentives deter capable actors from working to limit the problem. Syed suggests that “[s]ocial media platforms make fake news uniquely lucrative.”93

To these five factors, add one more development that makes disinformation more dangerous now: asymmetric warfare.94 The concept implies that when a nation confronts a significantly stronger military power, the weaker nation’s strengths will come from cheap and repeatable techniques whose damage is out of proportion to their cost.95 In this sense, disinformation is the proverbial slingshot against the American giant; the nations wielding it would have little or no chance on a physical battlefield.96 And because America, more than any other nation, is dedicated to free speech, with many institutional protections supporting that right, speech is an attack vector that is exceedingly difficult for Americans to defend against.

Not only has foreign disinformation grown as a threat, but American disinformation abroad has become comparatively benign. Legal scholars Josh Geltzer and Jake Sullivan have valuably codified what makes Russia’s or China’s contemporary disinformation campaigns wildly more abhorrent than any current American efforts.97 They start from the premise that U.S. information campaigns, at least since the Cold War, might aim at supporting a particular electoral outcome, but always operate on the principles of transparency and stronger democracy.98 In contrast, recent foreign campaigns: first, intend to divide communities and destroy common meanings of truth; second, utilize criminal methods like hacking and defamation; third, promote preferred candidates, as U.S. efforts do; but fourth, seek to conceal their foreign role and turn Americans on each other; and fifth, ultimately unravel the democratic process rather than deepen it.99 This sea change in technology and global strategy has transformed an ancient battlefield technique into a contemporary, potentially existential threat to democratic society.

To summarize, though foreign disinformation is a broad concept, it has identifiable boundaries and a succinct definition. It poses a threat that is not new but is escalating rapidly. While the factors in present-day information exchange described above might not be the only changes feeding the threat, they alone are enough to explain why new defenses are needed. Foreign disinformation, properly understood, can be deterred.

II. Already-Proposed Solutions

This Part surveys already-proposed solutions and the challenges on which those solutions have stumbled. It groups them according to the four potential actors: (1) state agents; (2) state subsidiaries; (3) knowing domestic amplifiers; and (4) unwitting accomplices. This taxonomy is helpful because the most effective means of combatting disinformation varies depending on who is proximately disseminating the mistruth. With this survey of potential solutions spelled out (and indeed, it is only a sampling), the remainder of this Part explores five common challenges holding the solutions back: (1) defining policy goals; (2) identifying the party best situated to address those goals; (3) First Amendment concerns; (4) enforceability; and (5) the risk of the solution being used against legitimate players. These challenges affect each of the proposed solutions to some extent, and therefore, the existing menu is at best incomplete.

A. The Actors

State agents. State agents include foreign leaders themselves, government ministries, and military detachments.100 In practice, this might be the Iranian President or an elite Chinese army cyber unit, like the so-called Unit 61398.101 These actors enjoy unique immunity protections from U.S. law that can preclude legal redress, whether civil or criminal.102 Government officials are protected by diplomatic immunity in the case of foreign individuals,103 or foreign sovereign immunity in the case of foreign governments.104 Militaries are protected from legal process by customary international and treaty law, as well as status of forces agreements.105 Of course, no state may deploy its armed forces into another state without a valid legal justification, and this includes cyber operations.106 But in practice, military disagreements are often addressed on the battlefield or in diplomatic chambers.

Most groups forging responses to disinformation attacks by any type of state agent have focused on foreign policy solutions undertaken by the federal government, such as sanctions, democracy promotion, and bilateral agreements.107 The U.S. government took this approach against Russian disinformation with economic countermeasures like the 2017 Countering America’s Adversaries Through Sanctions Act (CAATSA).108 At the same time, the U.S. military and intelligence agencies also seek to prevent disinformation attacks by attacking first.109 As already noted, America has a history of active disinformation campaigns,110 and in recent years, it has advanced its offensive cyber capabilities.111

State subsidiaries. This category includes publicly acknowledged, state-owned agencies like Russia’s RT media group and Sputnik112 or China’s China Central Television (CCTV),113 as well as unacknowledged groups acting independently of a nation’s organized intelligence services.114 A prime example of the latter—and the key driver of Russia’s 2016 U.S. election attack—was the IRA, which Russian oligarch Yevgeny Viktorovich Prigozhin nominally owned and operated as a private holding.115 State subsidiaries can also be quite autonomous, as with private firms that state actors hire.116 The shadowy Archimedes Group is one such firm; based in Israel, it is reported to have conducted disinformation campaigns on behalf of unknown actors in Africa, Latin America, and Southeast Asia.117 This category of actors rarely benefits from the same immunity protections as officials operating as state agents, and as such, court processes are an option.118 Individuals in the group that Prigozhin operated were among the defendants whom the Mueller team indicted—on charges of bank fraud, wire fraud, and conspiracy.119

Solutions that might also be used against state agents could deter this category. For example, as an early response to the Russian election interference, the Obama administration expelled from the United States thirty-five Russian nationals associated with Kremlin spy agencies.120 Technological defenses also apply here. Commentators like former White House national security coordinator Richard Clarke have proposed a cyberwar fortification response: hardening domestic infrastructure and, if necessary, launching online countermeasures to directed attacks.121

Domestic Amplifiers. Amplifiers transform otherwise isolated disinformation into mass media. They are often the key vehicle that turns a story viral. They include social media platforms, so-called influencers122 (real and fake), bots and other automation and augmentation technologies, domestic news media, and independent news-like websites.123 Here, potential solutions are weaker. The number of actors is far greater and more diverse, amplifiers sit smack in the middle of free speech and free press protections, and, because they are private actors, compelling action or inaction might be impossible.

Here, analysts and leaders have proposed or enacted a wide range of legislative remedies, some that promote transparency124 and others that restrict threatening speech.125 Examples of the latter include a recent California law that makes it a crime to distribute deepfakes intended to manipulate an election,126 and another California law creating a private right to sue entities that produce pornographic deepfakes using the victim’s identity.127 Activist groups of different ideological shades have turned their spotlight on media companies themselves, urging corporate changes through consumer pressure.128 When restricting threatening speech by domestic entities, First Amendment concerns reach their apex.

This is also the category where technological and internal-corporate solutions are most prominent. In dealing with bots and deepfakes, which many social media platforms prohibit by their own terms of service,129 private companies and public-private partnerships have developed solutions like algorithms designed to detect and flag (or block) automated and altered content.130
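For concreteness, the following is a minimal sketch of the kind of rule-based heuristic on which such detection tools might rest. It is hypothetical and written in Python; the thresholds, field names, and signals are invented for illustration and do not describe any platform’s actual classifier, which would rely on machine learning over far richer signals.

    from dataclasses import dataclass

    @dataclass
    class AccountActivity:
        posts_per_hour: float   # average posting rate over a sample window
        account_age_days: int   # days since the account registered
        duplicate_ratio: float  # share of posts duplicating other accounts' content

    def flag_for_review(activity: AccountActivity) -> bool:
        """Flag an account as likely automated when several bot-like signals co-occur."""
        signals = 0
        if activity.posts_per_hour > 30:    # sustained superhuman posting rate
            signals += 1
        if activity.account_age_days < 7:   # freshly created account
            signals += 1
        if activity.duplicate_ratio > 0.8:  # overwhelmingly copied content
            signals += 1
        return signals >= 2

    # A week-old account posting fifty near-identical messages per hour is flagged.
    assert flag_for_review(AccountActivity(50.0, 3, 0.9))

Requiring multiple signals to co-occur reflects the tradeoff any such tool must strike: any single signal alone would sweep in too many legitimate accounts.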

Social media platforms have also announced policy changes independent of, or at least in response to, market pressure. Twitter in fall 2019 announced it would block political advertisements for at least the next year.131 Facebook established a new “oversight board,” consisting of up to forty paid, part-time members who “will adjudicate controversies arising from Facebook’s in-house efforts to enforce its standards on hate speech, misinformation[,] and other prohibited content.”132 Without yet exploring the reasons why, it bears mentioning that commentators and interest groups have already attacked these internal measures as half-hearted, unmanageable, or pure window-dressing.133 This is not to say that amplifiers themselves should not play a role. For firms like Facebook that host disinformation, it is impossible to imagine a solution in which they are not involved.134 But there are many perceived shortcomings in their attempts, which means that, at least for now, their capacity and value in countering the threat are unknown.

Unwitting accomplices. This final category includes the recipients of information: the consumer public, individual decisionmakers, or information-consuming agencies.135 The distinction between accomplices and amplifiers is that accomplices have no intention of promoting disinformation and no responsibility for others who do. An accomplice is anyone who takes in disinformation and, if sharing it, does so either unaware of its falsity or to call attention to its purpose.136 The category also includes elected leaders and government agencies when they consume rather than make the news. Here, the effects are most diffuse and so are the potential solutions.

Common proposals touch on better education and online security (the latter because disinformation often comes from hijacked accounts or is microtargeted based on hacked information).137 Transparency is a recurring theme.138 Many groups have called for greater consumer data privacy to prevent nefarious microtargeting and manipulation of users.139

B. The Challenges

With the four types of actors spelled out, one can more readily identify the challenges. Even if the actors overlap, each category has unique vulnerabilities and capabilities. As noted, some solutions are appropriate for only some groups, while others are universal. Notably for this analysis, legal processes have been largely absent from the discussion. In considering why a new legal response is needed, the first step is to identify where the existing solutions fall short. There are at least five common challenges that any disinformation solution will confront.

First, the challenge of defining the underlying policy goals. This challenge becomes clear when considering the variety of actors. There are so many potential sources of disinformation and so many ill effects that it is hard to imagine one technique fixing the disinformation problem writ large. What threat, then, should the solution redress? Limiting foreign intelligence attacks during elections? The looming issue of AI, bots, and deepfakes? Or is it the underlying lack of civic education and media literacy? Advocates can justify pursuing each of these goals, but limited resources and appetite require that policymakers determine where the nation will get the most bang for the buck. For instance, solutions aimed at elections might overestimate the cost of electoral interference,140 in which case sweeping efforts to curtail political speech141 might weaken the capacity to address the costlier threat of disinformation stoking racial conflict.142 Or instead, a narrow focus like limiting disinformation on social media might be ineffective or counterproductive. For example, an earlier Facebook tool to label false information as “disputed” perversely caused more sharing of the flagged content and thus increased public interest.143 It ignored the bigger challenge of how to reduce disinformation’s resonance and reach. Moreover, a misguided policy foray into sensitive terrain, like free speech and elections, might deepen the public’s distrust of institutions and further fuel the threat.

Second, which parties are best situated to pursue the specified goals? Dividing the solutions into categories of actors reveals many leverage points at which to respond, but only certain entities are suited for some of those efforts. For instance, only government can lead a cyberwar counteroffensive or levy economic sanctions. Private companies control the regulation of information online unless governments compel them to act.144 Likewise, the media bears a burden to identify disinformation channeled through it or to spot other parties who spread it.145 Civic institutions and consumers play a large role in public education and resilience.146

Here, a grave challenge emerges: each group might have perverse incentives not to take corrective action. Government might hesitate because it uses similar techniques.147 The U.S. military must consider the cost of lowering the normative bar against counterattacks.148 It also goes without saying that some individuals in government will benefit from disinformation campaigns and might have political reasons to deter enforcement. Online platforms and the news media often profit from disinformation.149 And the public reads disinformation because it likes it; the stories stimulate short-term interests and subconscious beliefs.150 A RAND study applying working-group methods to these challenges noted that most consumer educational initiatives failed for lack of ability or motivation.151 Such initiatives put the burden of training on consumers without granting them additional time or incentives.152 When it comes to choosing who is best equipped to tackle these problems, perverse incentives become an essential test.

Third is the issue of how to address America’s First Amendment protections. Without attempting to summarize all relevant First Amendment doctrine,153 suffice it to say that most, but not all, forms of speech are protected.154 Though private communications platforms operate largely outside the First Amendment’s reach, if the government imposes restrictions, it risks overreach and either reversal in court or, alarmingly, no pushback at all and a shift in norms.155

Avoiding overreach is possible in one of two ways. The first is limiting new efforts to stay within an existing First Amendment exemption. Despite the Constitution’s broad free speech protections, for decades, courts have accepted restrictions on various types of “unwanted speech.”156 The most fitting exemption for disinformation might be “fighting words,” which the Court has described as “those which by their very utterance inflict injury or tend to incite an immediate breach of the peace.”157 To construe disinformation in such terms requires focusing on its propensity to disrupt civil society and fuel social conflict. Russian agents in the 2016 attack indisputably intended as much.158 But the conflict and incitement the Russians attempted is a different breed from what the Court envisioned in Chaplinsky v. New Hampshire. Refining the related incitement doctrine in Brandenburg v. Ohio, the Court said advocating the use of force or criminal conduct is protected unless “such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.”159 There, the Court struck down a law that Ohio had used to prosecute a Ku Klux Klan leader who had participated in a cross-burning rally because the likelihood of violence was too indirect; a speaker at that rally had threatened the possibility of “revengeance taken” on the President, Congress, and Supreme Court if they continued “to suppress the white, Caucasian race.”160 This core interpretation would need to be radically reconceived in the disinformation context to encompass disinformation’s more diffuse effects. Alternative First Amendment exceptions, like defamation, might also encompass foreign disinformation, but even then, it could be an awkward fit.161 That said, the Court has allowed these exceptions to expand over time, and a disinformation solution would likely need to do just that—push the constitutional boundaries further—but policymakers should attempt such expansion with caution because of the many types of acceptable and essential speech it might swallow.

A second way to limit overreach is to sidestep First Amendment concerns by targeting only unprotected speakers. But this is a narrow subset of actors. Free speech protection covers U.S. citizens, whether located on U.S. soil or abroad,162 and lawful permanent residents residing in the United States.163 This approach to avoiding a First Amendment conflict could be applied only against state agents and state subsidiaries—that is, noncitizens. It would omit many domestic amplifiers, who often are citizens.164 As nations take a more proactive stance against disinformation, foreign actors have already begun encouraging partners inside the targeted nations to create and spread disinformation themselves.165 Evidence also exists of “[h]omespun operations on social media” in America, by Americans.166 Foreign states will likely recruit local actors in America because local voices are far more influential in their communities.

Though neither of these existing routes to avoid First Amendment conflict will allow an easy solution, that does not mean the First Amendment is an unbreachable barrier. In other contexts such as terrorism, lawmakers and courts have found a narrow path through these constraints to fashion statutes that control unacceptable conduct without unfairly limiting constitutional rights.167

The fourth challenge is enforceability, which encompasses a whole slate of legal and practical issues. In addition to First Amendment protections, there are statutory protections for many online platforms and news agencies, undermining the capacity to regulate them.168 These include not only online platforms’ immunity from private suits but also essential press protections like whistleblower laws and state-law reporter shield privileges.169 Another enforceability problem is whom to hold liable and how. How will the original sources of disinformation be identified? In retrospect, it is easy to point at the actors in the 2016 Russian disinformation operation, but that took separate investigations by the CIA, the Senate, and a special counsel to achieve.170 Outside groups and media platforms have launched their own analyses recently, but it could be difficult for a lone victim to obtain the evidence necessary to identify, let alone hold liable, the culprit.171 The introduction of private firms acting on behalf of state actors does not prevent fixing blame, but it heightens the burden of proving a state’s culpability, requiring the plaintiff to identify another link in the chain. Separately, in terms of fixing blame, it seems fair that unwitting consumers should not be held liable for spreading content, but if authorities are to target amplifiers, policymakers must determine what mens rea separates those categories. Lastly, there is the issue of remedy. When the target is a foreign actor in a noncooperative nation, authorities there may be unable or unwilling to enforce verdicts.172 Is it sufficient to have a court verdict holding a foreign actor responsible, even if the remedy is never fulfilled?

The fifth and final challenge is that the solutions themselves could be weaponized and turned back against legitimate players.173 Asymmetric warfare necessarily looks for a nation’s greatest vulnerabilities, and those are often the areas that the target nation has an interest in leaving free. Using speech against America is the perfect example of turning a shield into a sword. America’s enemies can turn the First Amendment against it, not only by defending their attacks under its provisions (for example, by recruiting domestic partners), but by provoking an overzealous response.174 Any restriction on harmful speech necessarily risks weakening rights for similar types of speech. Policymakers and society broadly must agree where to draw the line, balancing national security and freedom. Even efforts to increase transparency can result in blowback by authoritarian regimes. When American authorities sought to counter Russian election interference by forcing media groups RT and Sputnik to register under the Foreign Agents Registration Act (FARA), Russian authorities responded by forcing Voice of America, Radio Free Europe/Radio Liberty, and seven other U.S.-backed outlets to register as foreign agents, a cycle that “had a powerful chilling effect on Russian outlets.”175 “[T]he cost of continuing to call out Russian actors as foreign agents may well be the demise of the dissemination of independent media voices within Russia.”176 Many of the currently proposed remedies are prone to similar misuse.177

In considering the proposed solutions and their challenges, two points bear reinforcing: First, useful solutions will remain conscious of which actors they are targeting. In the end, all actors must be addressed. Second, challenges arising from imprecise policy goals, enforceability, and First Amendment protections apply in each instance. The right solutions will be prepared to wisely tackle them all.

III. A New Private Right of Action

With the terrain defined, this Part, in concise terms, constructs the statute. First, it lists the statute’s elements; second, it defines the precise goal. Subsequently, in Part IV, the Article digs deeper into the elements and shows how they enable the statute, unlike already-proposed solutions, to succeed.

A. A Law that Balances Security and Freedom

Solving the online disinformation problem should provoke gut-level discomfort. At its heart, the goal is to limit one of democracy’s great freedoms—that of expressing and consuming ideas. From its birth, the internet was celebrated as an engine of free expression178 and a source of global liberalization.179 The fact that the internet has veered from these ideals does nothing to lessen the essentiality of those values. In proposing any means to combat online disinformation, wise policymakers must keep those values intact. In some ways, that means imperfect solutions. Better to let some harmful speech through than to over-restrict. A solution that protected against all harmful speech would become a First Amendment cancer. The type of speech from which people need protection is not that which is upsetting or untrue, but lies that are strategically milled to harm.

The statute proposed here is aimed squarely at weaponized speech. As Justice Holmes once wrote, “[I]f there is any principle of the Constitution that more imperatively calls for attachment than any other it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate.”180 And to state it crudely, sticks and stones may break one’s bones, but most words can never hurt. The anti-disinformation law we need would target only those words that truly injure.

This proposed statute aims to fill a narrow but important gap. Namely, it would thwart disinformation when the actor maliciously causes demonstrable harm. It does this by achieving another goal that already-proposed solutions largely miss: empowering the victim. Whereas many approaches try to help consumers avoid disinformation, this statute seeks to make the victims whole.181 Whereas others leave in place perverse incentives—pecuniary, political, and psychological—this solution encourages parties to act. Though many solutions target only one actor, this proposal can be used against all parties. Finally, this solution can attack a wide variety of disinformation, whether meant to alter an election, sabotage industries, or simply sow dissent.

The Proposed Statute

This anti-disinformation law grants individuals who have suffered harm a private right of action for money damages against other parties—domestic or foreign; and state, corporate, or private—who, by reckless neglect or intentional malice, spread false information that is fairly traceable to a foreign disinformation program.182

The statute has the following specific provisions:

Culpable foreign disinformation is defined by: (1) a foreign source; (2) factual mistruth; (3) the originator’s malicious intent and strategic aim to undermine civic well-being; and (4) resulting harm when the factual mistruth is released into the public sphere.

A court would have jurisdiction over an American person who suffers injury, regardless of where the foreign disinformation is propagated or where the harm occurred, as long as the defendant used a United States–based facility of commerce or communication.

The plaintiff must show injury in fact to obtain standing.

Any mens rea less than recklessness will be insufficient to establish liability.

The plaintiff must show that the defendant’s actions were a substantial factor in causing her harm. Pure comparative negligence applies. If the plaintiff or other parties were contributorily negligent, the factfinder should apportion fault accordingly.

If the plaintiff successfully proves liability, the court may award up to treble punitive damages, plus compensatory damages. (A worked illustration follows these provisions.)

As an exception to the Foreign Sovereign Immunities Act (FSIA), a foreign state’s government, military, intelligence community leaders or members, or supported entities thereof may be held financially liable under this provision.

The United States government may assist in collecting damages by attaching claims under this law to funds seized from the foreign nation or entity (sources from which funds may be attached will be enumerated).

There will be no executive waiver whereby the executive branch may halt proceedings.
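To make the causation, apportionment, and damages provisions concrete, consider a hypothetical with invented figures, on one natural reading of the damages provision (punitive damages capped at treble the compensatory award):

    Compensatory harm proved by the plaintiff:        $100,000
    Fault apportioned to the reckless defendant:      75%
    Compensatory award (75% of $100,000):             $75,000
    Punitive ceiling (treble the compensatory award): 3 × $75,000 = $225,000
    Maximum total award:                              $75,000 + $225,000 = $300,000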

B. The Policy Goal

Plainly stated, the specific goal of this law is to reduce the flow of foreign disinformation to American consumers at all times, not merely during elections. It is broad in that it encompasses all varieties of actors and targets, but narrow in that it requires a provable, initial foreign source.

At its base, any smart policy goal maximizes objectives relative to resources. The objective must also be in line with the community’s desires. Many proposed anti-disinformation solutions stumble at the policy stage. A key example is consumer education, where the goal—turning the tide of news readers against foreign disinformation by helping them identify mistruths—is misaligned with (or oblivious to) consumer desires and capacities.183 Another example is industry self-regulation, which ignores companies’ financial interest in the status quo.184

This proposed statute requires minimal public investment. The litigation costs are borne by the litigants and partially reimbursed through successful suits. Enforcement—seeking attached funds—would sometimes require state assistance, and the proceeds would come out of monies that would otherwise go to another recipient. But law enforcement would be performing that forfeiture collection regardless, and ultimately, the cost falls on the complicit actors.

Empowering the victim has other advantages. Even if civil discovery is not always easy, it will be a new tool to pry into secretive firms or state actors and expose disinformation attacks and their techniques. As for financial awards, large monetary judgments might do little to deter foreign states, but they could limit assistance by oligarchs whose Western business interests leave them exposed, and they would have an outsized effect on domestic companies and public figures, discouraging reckless behavior. Furthermore, fighting disinformation is truly an all-hands-on-deck effort. The failed response so far has shown that further, wise contributions to the battle are needed, and this new law will empower a force of “private attorney[s] general” to contribute.185 Most importantly, this solution carries a natural incentive for its use: financial or emotional payoff. The reward for fighting disinformation in this approach is not merely deterring future attacks—a goal in which it is hard to take personal satisfaction—but also making the victim whole if damages are collected, or at least giving the victim the satisfaction of her day in court.186

This law does not purport to be a complete solution. But it will be an important one.

IV. Why This Statute Succeeds

This final Part identifies the six most demanding obstacles the statute might face and shows how its elements account for each. Implementing any solution will mean running a practical and constitutional gauntlet, but from the start, this statute is primed to succeed.

A. Private Right of Action in a Terrorism Context

This statute’s strongest attraction is that the federal government has successfully fielded something very much like it before. In doing so, courts and policymakers have worked out the kinks, which gives this proposal a running start. An analog for the anti-disinformation law exists in the antiterrorism context: the 1992 Antiterrorism Act, modified by the 2016 Justice Against Sponsors of Terrorism Act and the Anti-Terrorism Clarification Act of 2018 (hereinafter referred to collectively as the Antiterrorism Act, or ATA).187 Through this legislation, Congress created a private right of action for terrorism victims or their beneficiaries to seek civil remedies against the responsible groups or their state sponsors.188 Just as foreign disinformation victims currently have no means to win recompense, Congress passed the Antiterrorism Act to enable other victims who had no recourse: the family members of Americans killed in the 1985 hijacking of the Achille Lauro cruise ship189 and the Pan Am Flight 103 bombing.190 The legislation achieved this by amending the Foreign Sovereign Immunities Act (FSIA), eliminating protection for states or their agents if claimants proved them culpable in a terrorist act.191 Through its many revisions, Congress expanded the Antiterrorism Act’s reach. These combined changes, primarily codified at 18 U.S.C. § 2333, empowered victims and their family members to do in the terrorism context exactly what the proposed disinformation law seeks to do in its context: punish and weaken wrongdoers through civil enforcement and, to some degree, make the victims whole.192

The biggest challenge that Congress dealt with in the Antiterrorism Act was expanding jurisdiction.193 In a 1996 revision, Congress made the private right of action explicit in the text (rather than presuming an implied right under international law), broadened the law’s extraterritorial reach, and opened the possibility of foreign aliens suing in United States courts.194 The 2018 revision expanded jurisdiction further by enabling claims against a defendant who had taken advantage of the United States’ facilities or material assistance in any way.195 This allowed suits against nonstate actors under so-called secondary liability, when those actors aided and abetted but did not directly participate in attacks (allowing material-support claims against individuals and companies that enabled terrorist acts).196 The proposed anti-disinformation statute takes heed of this slow expansion of jurisdiction by opening the door at the outset to claims by any harmed citizen, regardless of where the harm occurs. Courts would gain personal jurisdiction over foreign actors whenever those actors rely on domestic channels of commerce or communication: posting information on American-owned social media platforms, using U.S.-based internet servers, or contributing money to amplifiers via American credit card processors, for instance.

Immunity posed a similar jurisdictional barrier for the Antiterrorism Act. Before Congress passed it, FSIA prohibited claims against any foreign leader and many foreign companies and officials.197 The statute overcame the immunity bar by modifying FSIA to permit claims for acts of terrorism officially endorsed or enabled by foreign states, referred to as primary responsibility.198 The exception was originally limited to a handful of nations formally designated by the Secretary of State.199 The government gradually expanded that list, and in 2016, Congress entirely removed the requirement that the defendant state be an officially named sponsor.200 The law did not (and still does not) allow victims to sue states for aiding and abetting (only nonstate actors may face secondary liability).201 In other words, the state or state agent must be directly involved in planning or executing the action. The immunity challenge should sound several cautionary notes in drafting the anti-disinformation law, another of which—executive waiver—is discussed in depth below. But as to jurisdiction, while the proposal does not directly address secondary liability, its framers might consider adding a more specific aiding-and-abetting provision that waives immunity for state defendants.

The Antiterrorism Act has also dealt with complex procedural issues, like how to serve process.202 In March 2019, the Supreme Court held that, in a suit against Sudan for a terrorist attack, mailing notice directly to the minister of foreign affairs at his ordinary place of business in the foreign state was the appropriate mechanism.203 The same rule should apply in the disinformation context.

The Antiterrorism Act did not expand discovery procedures beyond the standard rules, and litigation shows that discovery remains a hurdle, especially in suits against state actors.204 Admittedly, when actors cover their tracks, as with disinformation attacks and in contrast to terrorist attacks (where actors often seek credit), the barrier is even higher. But the antiterrorism legislation has set concrete examples that anti-disinformation plaintiffs can follow to overcome this hurdle. Arguably, if a plaintiff has enough resources, there are few statutory bars on discovery in antiterrorism cases; FSIA is not prohibitive. The Supreme Court upheld broad discovery under FSIA in a post-judgment execution action against Argentina in 2014, even over the objection of the U.S. government.205 A factor that might lower the discovery bar in disinformation campaigns is that a number of organizations, government and private, often publish evidence of these attacks and their participants.206 To the extent that discovery remains a challenge, there are extreme measures the anti-disinformation framers might consider. For instance, Rep. John Conyers once proposed an antiterrorism amendment that would have allowed courts to deem a fact established against the defendant if a foreign state thwarted attempts at discovering that fact.207

The antiterrorism legislation also worked through the issue of mens rea at the pleading level, establishing the minimum requirements to state a claim. In one recent case under the law, Crosby v. Twitter, Inc., victims of the 2016 Pulse Nightclub shooting sued Twitter for hosting ISIS messages that allegedly radicalized the attacker.208 The Sixth Circuit upheld the district court’s dismissal, ruling that the victims had not made a facially plausible claim that Twitter “knowingly and substantially assist[ed]” the attacker.209 In doing so, the Sixth Circuit affirmed the district court’s six-factor mens rea analysis, which other circuits use as well: “(1) the nature of the act encouraged, (2) the amount of assistance given by defendant, (3) defendant’s presence or absence at the time of the tort, (4) defendant’s relation to the principal, (5) defendant’s state of mind, and (6) the period of defendant’s assistance.”210 In its analysis, the district court held that the plaintiffs could point to no facts showing that Twitter was aware ISIS was using it to facilitate attacks yet continued to provide ISIS its services.211 This six-factor test is directly transferable to disinformation litigation, where courts could use it to assess the proposed statute’s reckless or knowing standard.

The final issues the Antiterrorism Act dealt with were the remedy, namely collecting foreign assets when a victim succeeds in her suit, and potential executive waivers. The Act eliminated some FSIA restrictions that would have blocked collection from seized state assets, allowed successful plaintiffs to file superior liens on a defendant’s holdings, and enabled plaintiffs to attach many types of proceeds gathered by law enforcement to pay those liens.212 For state defendants, the Act expanded the avenues of recovery to include any corporation in which the state or its actors held a majority interest,213 even if the entity was not itself an agency or instrumentality of the state.214 Congress further complemented these techniques with powerful civil enforcement tools under the USA Patriot Act.215 But admittedly, victims found more obstacles than success in asset recovery, and Congress’s multiple attempts to overcome those challenges have floundered in court.216 This primary hurdle has drawn pushback from the White House over concerns about encroachment on foreign policy goals. The 2016 revision to the Antiterrorism Act passed despite President Obama’s veto, issued for fear of its effect on foreign diplomacy.217 In 2018, Congress tried to expand the pool of money to which plaintiffs could attach their claims to include funds seized in anti-narcotics enforcement.218 This time Congress cut out the greatest barrier to victim recovery, an executive waiver, but immediately felt pressure to restore that waiver power for fear that plaintiffs would use the expanded jurisdiction and recovery capabilities against friendly states, or that other states might respond—returning to FSIA concerns—by weakening American immunity.219

The Antiterrorism Act’s executive waiver fight is a useful guide for the disinformation statute, and the White House’s pushback over encroachment carries an important lesson: there will be occasions when personal and governmental interests conflict. Concern about foreign policy repercussions cannot be overstated. This law’s framers should not take the threat lightly, and that pushback deserves consideration here.

America’s standing is largely defined by its global interconnectedness. It does business in nearly every country, sends troops and diplomats abroad, builds infrastructure on which other nations rely, and in turn depends on multilateral institutions to responsibly settle disputes.220 As one prominent scholar on the subject said: “The United States has more to lose than any other country by removing the shield of foreign sovereign immunity . . . .”221 Critics argue that by allowing suits against foreign actors in U.S. courts, competitor and even friendly nations will respond with a tidal wave of challenges to American conduct in their courts.222 Instead of resolving disputes diplomatically or through suits brought in the United States, American businessmen engaged in trade could be arrested the moment they exit a plane in any nation with a standing warrant.223

This recrimination issue garnered renewed attention during the Covid-19 pandemic, when over a dozen parties, including the states of Missouri and Mississippi, sued China’s government and communist party for their putative roles in the disease’s rise.224 These claims arose primarily under tort law and were therefore subject to FSIA.225 To overcome an immunity challenge, they relied on an FSIA exception that waives immunity for foreign commercial activity if it has a direct effect in the United States.226 But as practitioners made clear, that exception likely did not apply to the actions in question, and the claims were thus bound to fail at immunity’s gate.227 Tapping the public mood, several congressmen then proposed new legislation modeled on the Antiterrorism Act, aiming to do exactly what the anti-disinformation law would: create a new FSIA exemption for a specific purpose (pandemic suits).228 Critics, in response, warned of in-kind reprisals by other states and likely violations of international law should U.S. cases proceed.229

These attacks should be expected again in the disinformation context, but for at least three reasons, fewer punches will land. First, it is true that, unlike with terrorism (and more like Covid-related public health matters),230 the United States will be more prone to suits in reprisal. Although America does not engage in anything like traditional terrorism, U.S. intelligence agencies do run implicit and explicit disinformation-style campaigns abroad,231 and more worryingly, a claim could be lodged against entirely truthful information sources like the Voice of America.232 But there is also reason not to worry: America’s disinformation-style efforts are different. As mentioned, at least since the Cold War, America has operated with striking transparency in its electoral and social influence campaigns.233 American information efforts bend toward supporting global democracy, while competitor states aim to undermine it. And while those distinctions might matter less to the competitor states, they make a difference under international law, where Russia’s and China’s disinformation attacks violate national sovereignty and non-interference norms but American efforts broadly comport with them.234 For this reason, America’s allies are less likely to express concern.

Second, unlike with Covid-19 where many of the suits relied on theories of negligence by the Chinese government,235 the concern in both the terrorism and disinformation contexts is whether the acts are intentional.236 Because the disinformation law requires an initial, aggressive act by a state actor, it would never allow liability against a foreign government for negligent misstatements. Likewise, foreign governments would be unjustified in criminalizing the sort of dunderheaded but not intentionally harmful statements that U.S. authorities occasionally let loose.

Third, whereas Covid-19 presented worrying scenarios of U.S. litigants attempting to use their claims for extensive legal discovery against China, the fact-finding demands on foreign states would be lighter in disinformation suits, reducing backlash on this front.237 History has already shown that a surprising amount of evidence emerges without foreign cooperation, through congressional investigations or public reporting. Moreover, unlike with Covid-19, where most of the documentary proof sat in foreign government annals, disinformation’s reliance on domestic channels means that significant proof will be available on U.S. soil.

For each of these reasons, combined with the important point that this law is designed to be used more often against entirely domestic actors who amplify foreign disinformation than against states themselves, the actual threat of reprisal by other nations is likely slight.

Despite these distinctions, just as the ATA’s framers wrestled with whether to include an executive waiver, there will be pressure to include one for disinformation. A waiver would be one functional mechanism to let at least some suits through while enabling the White House to screen out those posing foreign repercussions. Two unpleasant factors make granting a waiver unacceptable. First is the similarity between America’s influence campaigns abroad and the techniques foreign nations use in disinformation attacks. Even if these programs can be distinguished in the careful language of diplomatic halls and legal briefs, a White House worried about optics over substance might simply quash suits that look bad to foreign partners. Second, a policymaker might have benefitted from a disinformation attack and would be perversely incentivized to block a suit. It suffices to say, given the 2016 election and Trump’s controversial relationship with Russian President Putin, that executive waiver is a power some White Houses might abuse.238 The Trump administration was bound to be uncooperative at best. Together, these factors suggest that Presidents would attempt waivers more often in the disinformation context, which would neuter the statute’s effectiveness. Congress has excised the waiver from the Antiterrorism Act. The disinformation statute should never have one.239

Antiterrorism suits have produced many failed recovery efforts, but plaintiffs have landed important victories as well, and Congress has spent three decades reworking the Act’s provisions to ensure even more success. The current framework offers important lessons in the disinformation context: it illustrates the obstacles the disinformation framers will face and potential workarounds, and it gives future plaintiffs a map for litigation and solace that there is light at the end of the tunnel.

B. Defining Disinformation

Although public figures apply the term “fake news” with many meanings, this statute’s provisions define disinformation clearly. Here, it is simply a matter of returning to the definition offered in Part I.240 Foreign disinformation has: (1) a foreign source; (2) factual mistruth; (3) a detrimental intent and strategic goals associated with the content’s generation and initial sharing; and (4) resulting harm when the mistruth is released into the public sphere.241

By adopting this definition, the law enables litigants to identify what types of harmful information are targetable. This is not to say that a litigant will easily prove each prong, but courts will have a fully deployable standard for determining whether the plaintiff’s prima facie case is met. For instance, a foreign effort using false information to spark a heated political rally could be actionable: plaintiffs could show that the effort came from abroad, that it rested on provable mistruth,242 that its goal was to broadly inflame social tensions, and that the rally, if it occurred, inflicted measurable harm. Compare this to a similar act that would not be covered: a local community member who rallies others to engage in a politically contentious march, whose political message is untainted by foreign influence, and who has had no contact with foreign-linked actors. His message comes from within the community, under First Amendment protections. Even if the organizer stated mistruths and intended to cause social upheaval, this law would not touch him.243 These examples show that a category that at first appears impossible to define becomes fully describable when the definition’s four prongs are carefully applied.

C. Appropriate Mens Rea

The statute’s required mens rea is that the defendant acted knowingly or recklessly—no less. Like the standing requirement, this test will likely narrow the law’s use and direct suits toward the most egregious culprits: presumably foreign entities that create the disinformation or large media and online institutions with the resources to know the source of their content.244 The knowing or reckless standard is separate from the underlying requirement that foreign entities created or spread the disinformation with malicious intent and strategic aim. Thus, the mens rea is effectively higher for state actors because disinformation’s very definition requires showing that they knew they were engaged in disinformation. Even the lower bar, recklessness, would be hard to meet for, say, an outspoken American who retweets dozens of foreign disinformation memes.245 The plaintiff would have to show that the retweeter was exposed to enough readily available reporting on the mistruth and foreign origins of those claims that he acted recklessly. Thus, a person exercising reasonable care could not run afoul of the law through inattention.

Further assuring that this law is not abused, the plaintiff would have to prove the defendant’s sufficient intent as to each of the law’s first three definitional prongs (the fourth prong, resulting harm, is not relevant at the intent stage). For example, in a suit against the serial retweeter, the plaintiff would have to show that the defendant knew, or recklessly disregarded the risk, that she was repeating false information (prong two), that the information originated from a foreign source (prong one), and that the foreign source intended strategic disruption (prong three).

A related, practical point is that a fixed mens rea might attach to a shifting actus reus in future cases as public education about disinformation grows. If technology enables automatic warning flags on dubious memes or questionable sources,246 for instance, what constitutes knowing amplification might turn on whether such a warning was issued and received. The law could thus naturally conform to society’s changing norms.

There is a valid argument that, far from reasonably limiting abuse, this burden actually makes pleading impossible—that it would require a smoking gun showing that the defendant recognized the content as disinformation and acted anyway. If the framers agree with this concern, though it seems a misreading of the law, they might choose an alternative mens rea that borrows from defamation. Traditional defamation applies a different intent standard depending on whether the defamatory statement involved a matter of public or private interest. If public, the defendant must have acted with malice or reckless disregard,247 but if private, a defendant’s negligent failure to ascertain the mistruth is sufficient.248 As applied to disinformation, a negligence standard could mean a person faces liability if he fails to check the truth and origin of every statement before he posts it. That seems implausible in practice. Moreover, the reckless requirement described above is an achievable pleading standard. It is the right standard for this law.

Setting the required mens rea at knowing or reckless raises a tougher question: whether news providers should receive more protection than others. It is well recognized that the press is a unique institution under the First Amendment,249 but there is little case law interpreting what protections the free press clause actually adds.250 With a few exceptions, the “protections offered to the institutional media have long been . . . no greater than those offered to others.”251 One consequence of this law is that news institutions might become obvious defendants. Plaintiffs would likely file claims against tech platforms, publications, or individual reporters, and some plaintiffs might have suffered no injury but might instead use the statute maliciously to hurt the press. This threat is magnified by already-existing public skepticism toward journalists.252 With this in mind, even a knowing standard could arguably set the bar too low for journalists and publishers. Perhaps the law should require a heightened, intentional mens rea specifically for the press.

In considering this risk, on one hand, journalists’ greater vulnerability is balanced by greater resources and know-how in confirming information’s source. Not only do journalists typically have the facilities and training to double-check a story’s accuracy, they typically have an ethical duty to do so.253 More worryingly, a carveout for journalists overlooks the ease with which bad actors might take refuge in falsely proclaiming themselves press. A significant technique already present in foreign disinformation attacks is the creation of false news sites.254 Requiring different treatment for the press would necessarily force courts into the dangerous territory of deciding what counts as journalism.255

On the other hand, most journalists are required to work at breakneck speed.256 Traditional factchecking expectations are not as strong as they once were,257 and freelance journalists in particular lack the financial wherewithal to meet the same standards as, say, The New York Times.258 There is also a theme in free press clause interpretation, in cases like Gertz v. Robert Welch, Inc.259 and New York Times Co. v. Sullivan,260 that seeks to grant journalists First Amendment “breathing space,” creating the freedom reporters need to make occasional mistakes in order to adequately inform the public.261

The balance of these concerns is delicate, but it tilts against heightened protections for journalists. As for breathing space, however true that claim is in broad strokes, the argument has limits. A judge might certainly weigh the media’s value to society when exercising her discretion over a journalist’s culpability as a defendant, but as Justice White once wrote, “Nothing in the central rationale behind New York Times demands an absolute immunity from suits.”262 Moreover, the reckless mens rea is not an insurmountable burden. It is hard to imagine that a reporter who does her job faithfully would fail to make some minimal effort to support an article’s claims with facts. One could not fairly say that a journalist who confirms a typical quote once, but does not double- or triple-check it, always acts recklessly. Most professional or aspiring journalists would never need the protection that an intentional mens rea would afford.

But the greatest factor weighing against a heightened intent standard is how enemies might abuse it. While courts might mistake some upstart yet legitimate news sites for propaganda, the greater danger is attackers hiding behind the journalistic veil of established sources. Consider the tough cases of official state news entities like RT and China Daily. It would not be hard for such firms to use a heightened mens rea to shield their malicious acts. After all, it was a mainstream Russian news source, Channel One Russia, that was responsible for the Lisa case—the false story about immigrant gang rapists in Germany.263 For all these reasons, it seems prudent to avoid a special protection for journalists at the outset. Still, it is fair to expect that wise judges, exercising their discretion, would scrutinize more strictly a suit against CNN, for instance, than one against a one-week-old website written in broken English.

D. Establishing Standing and Causation

What must the plaintiff do to meet her burdens on standing and causation? Does a plaintiff have standing by reading one article later exposed as propaganda? Can a reader who changed his presidential vote based on a series of false articles prove that those articles were the proximate cause of some harm?

For standing, the required level of injury is set by traditional Article III jurisprudence: the canonical test from Lujan v. Defenders of Wildlife.264 “First, the plaintiff must have suffered an ‘injury in fact,’—an invasion of a legally protected interest which is (a) concrete and particularized and (b) ‘actual or imminent, not “conjectural” or “hypothetical.”’”265 Second, there must be a causal link between the injury and the disputed conduct—that is, the injury must be fairly traceable to the defendant’s conduct and not the result of independent action by some third party. Third, it must be “likely,” not merely “speculative,” that the injury will be “redress[ed] by a favorable decision.”266

In Lujan, the plaintiffs did not establish standing because they could not prove imminent injury; environmental organizations whose members had once traveled to, and hoped to return to, environmentally threatened areas were not injured by the United States withdrawing ecological funding for those areas.267 The plaintiffs also failed to show redressability: the U.S. agencies they sued provided less than ten percent of the relevant foreign funding, nothing indicated that the projects would be suspended without the money, and nothing indicated that endangered species would be further harmed.268 In Crosby v. Twitter, Inc., the same Sixth Circuit case that dealt with mens rea above, the court applied a similar test in the terrorism context and held that the plaintiffs had not established standing at the pleading stage because they had not offered enough facts on the causation prong to show that the attack was fairly traceable to Twitter’s conduct.269 Merely hosting provocative ISIS content on its platform was not enough.

One can see how the standing requirement might play out with the hypothetical disinformation-fueled rally in Section IV.B. First, the plaintiff would have to show an actual injury: anguish from merely watching the rally on TV would not be enough, but being injured while attending, or having foreign agents coopt her business services to support the rally, would be. Second, the plaintiff would have to show that the disinformation was a proximate cause of the rally taking place. Third, she must provide evidence of her physical or reputational damages; affidavits and receipts could do that. Finally, she must show that those injuries are redressable through money damages—a prong that will be consistently easy to meet under this statute because the very purpose of a money suit is to provide material redress.

As to causation, the statute borrows its standard from defamation law. The disinformation plaintiff, as in a defamation case, must show that the defendant’s actions were a “substantial factor” in the resulting harm, but those actions need not be the sole cause.270 Because the law further incorporates a pure comparative negligence standard, neither a plaintiff’s contributory negligence nor mistakes by other parties could serve as a complete defense.271 The factfinder would have to analyze the circumstances and decide what percentage of responsibility the defendant or others bear, and what share falls on the plaintiff herself.272 The plaintiff’s own share would not defeat liability, but if high enough, it might dilute the defendant’s causal contribution so much that a judge would find it too insubstantial to proceed beyond pleading. The adoption of a comparative negligence rule is likely essential if the law is to have any real-world effect. Since 2016, foreign disinformation purveyors have increasingly comingled their product with domestic actors, both to hide their role and to magnify the legitimacy of their message.273 More recent efforts have involved foreign states coopting local activists into posting the unfounded messages, or foreign actors directly supporting existing extremists to promote homegrown societal division.274 Contemporary disinformation campaigns, to abuse a phrase, often take a village.275
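To make the apportionment concrete, consider a minimal worked sketch. The figures are invented purely for illustration, the symbols (\(D\) for total proven damages, \(f_d\) for the defendant’s fault share) are this Article’s shorthand rather than statutory terms, and the sketch assumes a several-liability approach in which each defendant pays only its own share:

\[
\text{recovery from a given defendant} = D \times f_d
\]

If a factfinder found total damages of \(D = \$100{,}000\) and assigned 60% of the fault to the defendant amplifier, 30% to third-party reposters, and 10% to the plaintiff herself, the defendant would owe \(\$100{,}000 \times 0.60 = \$60{,}000\). The plaintiff’s own 10% share would reduce, but not defeat, her recovery.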

Imagine the causation prong in practice. An average voter exposed to a multitude of disinformation who claimed she therefore chose a candidate that she might not otherwise have preferred would likely be unable to muster the necessary evidence to tie her vote substantially to one defendant. Contrast this with a voter who claimed that a single, foreign-funded, local troll posted false information about polling sites on election day, leading to long lines and ultimately disenfranchising her. Here, even if other people reposted the false message and the voter could have waited longer in line, evidence like news reports and her employer’s strict late-attendance policy would likely be enough to meet the substantial factor test. On the far end, although the comparative negligence calculation would be exceedingly hard, it seems Hillary Clinton would also have a plausible shot at proving causation in her 2016 run based on available evidence: the Senate Russia report, news reports, and polls showing the effect of disinformation on voters.

E. Assessing Damages

The challenge of assessing damages is two-fold: first, measuring the types of harm that disinformation might cause; and second, determining an appropriate remedy for that harm.

First, when measuring disinformation’s harm, antiterrorism suits unfortunately offer less guidance.276 Unlike a terrorist attack, disinformation cannot sever limbs; its worst material harm would likely be financial. Otherwise, a court must look to nonmaterial harm. This proposed statute allows plaintiffs to plead both types. As a concrete example, take the Lisa case, in which Russian disinformation compelled Germans to protest the local police.277 Under the anti-disinformation statute, shop owners at the protest site could have shown financial harm through diminished receipts from that day and testimony from repeat shoppers who said the protests kept them away. The city could prove harm by showing overtime expenses for police assigned to the protest. The police chief could have offered evidence of the time and money he spent publicly defending himself against false accusations. Each of these injuries would be measurable.

Plaintiffs could also make nonmaterial claims. Here, plaintiffs could draw lessons from other torts, such as defamation and similar privacy invasions.278 In those cases, plaintiffs may offer evidence of reputational harm,279 public exposure,280 and mental anguish.281 For example, a plaintiff seeking damages for slander might testify and offer witnesses to corroborate how the statements affected him emotionally, turned the community against him, fueled backlash at work, and caused physical distress.282 To put this in a disinformation context, consider the political response to the 2020 killing of George Floyd and the subsequent protests, during which some conservative activists sought to paint the Black Lives Matter movement as violent.283 Russian and Chinese disinformation trolls contributed to that one-sided portrayal.284 Members of peaceful groups falsely accused of violence might show reputational harm attributable to the disinformation with documentation of, for instance, opponents’ tweets reposting and spreading those disinformation claims.285

The second issue is the appropriate penalty: how to assign a dollar value to the harm. This statute aims to deter frivolous lawsuits by setting fair damage limits. Tort law offers a menu of possible ways to calculate loss. It is helpful here to divide the harm from disinformation into primary and secondary costs.286 Primary costs result from the direct injury, which for disinformation might be financial harm from business losses, data breach, or property damage (if the disinformation prompts other parties to take physical action). Courts normally remedy these costs through compensatory damages, and measuring them is straightforward using financial statements, receipts, and the like. Secondary costs include the nonmaterial harms cited above, such as reputational and emotional injuries, which are necessarily harder to measure. Courts address these costs with punitive damages. Punitive damages that far exceed compensatory awards grew controversial and prompted a tort reform backlash in the 1990s.287 And yet in 2008, as Congress further expanded antiterrorism protections, it amended FSIA to expressly allow punitive damages.288 Judges and juries typically calculate punitive damages in one of five ways,289 but rather than explore the options in depth, it suffices to note which technique the Antiterrorism Act uses: a multiple of compensatory damages and, specifically, treble damages.290 Therefore, whatever primary costs the plaintiff proves, for which the court would award compensatory damages, the court may add up to three times as much as an additional punitive remedy.
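A brief worked example shows how the multiplier operates. The dollar figure below is hypothetical, chosen purely for illustration, and \(C\) denotes the plaintiff’s proven primary (compensatory) costs:

\[
\text{maximum award} = C + 3C = 4C
\]

So a plaintiff who proves \(C = \$50{,}000\) in compensatory damages faces a ceiling of \(\$200{,}000\): the compensatory award plus up to three times that amount as punitive damages.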

The proposed disinformation law aims for a damages amount sufficient to create an effective deterrent and to remunerate the victim without enabling unfair windfalls that would prompt false suits. Accordingly, at the outset, it uses a maximum award of compensatory damages plus treble punitive damages. That formula has proven broadly effective in balancing the motivations and costs of parties in antiterrorism suits.291 One might criticize the analogy here because the direct costs of most terrorist attacks far exceed those of a disinformation claim. But scaling punishment to lesser relative harm is precisely what a multiplier accomplishes: if disinformation hurts less, any given claim should impose less liability. Limiting damages also discourages misuse of this statute. Finally, because disinformation’s harm can be widespread, even though any one lawsuit might not impose great costs, an accumulation of suits or a class action could have the same deterrent effect on defendants while better spreading recovery among the persons harmed.292

F. First Amendment Considerations

The proposed statute confronts the biggest challenge to regulating disinformation: First Amendment conflicts. In fact, it overcomes two hurdles: malicious actors misusing it as a weapon against legitimate speech, and culpable defendants claiming free speech as a valid defense.

First, as to avoiding misuse, this statute demands a pleading standard high enough to deter frivolous suits against speech that plaintiffs merely dislike. As already described, the required mens rea, causation, and harm showings will largely constrain the statute’s use to its designed purpose. Incidents of misuse remain possible, even inevitable, but what stands in favor of this law is that the First Amendment harms from other disinformation solutions are worse. This statute does not impose blanket bans on certain types of speech or create opaque standards for acceptable speech administered by social media platforms’ self-appointed boards. It checks misuse through the adversarial process and oversight by judges. If anything, the valid criticism of this proposal is that it is too onerous: it could leave much disinformation unchecked. But as the Supreme Court has said, “[E]rroneous statement is inevitable in free debate, and . . . it must be protected if the freedoms of expression are . . . to ‘survive.’”293

Second, this statute would survive a defendant’s First Amendment challenge. Even though First Amendment exceptions are narrow,294 two factors in the law—the requirement for a foreign source and the foreign actor’s malicious intent—ensure that plaintiffs can succeed: foreign defendants would lack First Amendment protection entirely, and speech with malicious intent might be so likely to incite unrest or violence that silencing it would survive a court’s heightened scrutiny. Here, a different line of antiterrorism legislation again provides a useful roadmap: material support.295 Litigation around that law reveals the First Amendment exception into which disinformation rightfully falls.296 Even where a court applies heightened scrutiny, proven disinformation will likely be unprotected.297

Material support for terrorism is prohibited under 18 U.S.C. §§ 2339A and 2339B. The Supreme Court addressed the statute’s First Amendment implications in Holder v. Humanitarian Law Project (HLP) and its subsequent line of cases.298 The issue in HLP was whether mere speech, advocacy for a terrorist group’s goals, can be fungible support and thus constitute a material benefit to terrorists. Importantly, the Court decided that the issue was speech itself, not otherwise prohibited conduct.299 In HLP, the plaintiffs sought to provide legal advice and other expertise in support of allegedly lawful activities of two designated terrorist groups. The Court found that in applying § 2339B, a “demanding standard” of scrutiny should apply.300 While the Court did not define what it meant by “demanding,” it placed that measure somewhere between the intermediate and strict scrutiny tests.301 And in analyzing the support the plaintiffs offered the groups—training in the use of humanitarian and international law, as well as political advocacy—the Court held that by even indirectly empowering terrorists to commit more crimes, the plaintiffs would be conferring a measurable, illegal contribution.302 In other words, even through mere speech, the plaintiffs would be furthering illegal terrorist conduct. And though the material support statute was a content-based restriction, legitimate government interests outweighed the harm of imposing restrictions in this instance.

Turning to the disinformation statute, two conditions would defeat a defendant’s First Amendment argument. First, if the defendant were a foreign state actor, it would likely have no First Amendment protection at all. “The Bill of Rights is a futile authority for the alien seeking admission for the first time to these shores.”303 First Amendment protections do not extend to foreign actors who speak into America from abroad or who have entered illegally and are deportable.304 The First Amendment also offers no refuge for noncitizens interfering in core civic functions like elections.305 And U.S. citizens have only a limited First Amendment right to receive and distribute foreign materials inside the United States.306 A plaintiff could therefore use this law against a foreign speaker, be it a state or state subsidiary, without any constitutional barrier. That leaves one set of likely defendants unprotected.

Second, if a plaintiff instead sued a U.S. resident under this statute, an American amplifier, for instance, the statute’s factual mistruth and malicious intent prongs would still enable it to survive a court’s scrutiny. In that scenario, depending on the facts, the speech would either be entirely unprotected or regulable subject to heightened scrutiny.307

The speech might be entirely unprotected because the factual mistruth prong will often place the defendant’s actions in the same category as defamatory statements. There, the speaker lacks First Amendment protection if he acted with actual malice, meaning “knowledge that [the statement] was false or with reckless disregard of whether it was false or not.”308 Actual malice can be a hard standard to prove, perhaps more so in the disinformation context. In one slander case, the South Carolina Supreme Court held that a police chief had not shown a reporter’s actual malice in publishing an opinion column that said the chief took bribes, even though the chief alleged that the reporter failed to investigate the source of the anonymous call that sparked the story and had expressed ill will toward the chief.309 In contrast, the Virginia Supreme Court held that a TV station had acted with actual malice in airing a report accusing a doctor of sexual assault, because the station knew that a source had retracted one of the statements on which the story was based and that the medical board had cleared the doctor of the accusations.310 For disinformation, then, if a defendant used disinformation to tar a public-figure plaintiff, for instance, that plaintiff would need to show more than mere negligence to meet the actual malice standard; she must show that the defendant knew the material was a lie or recklessly disregarded clear signs that it was. This might require proof that the defendant had seen reliable evidence to the contrary or knew that the underlying disinformation was untrustworthy. With discovery, and for domestic defendants, this is not an impossible burden—but it is steep.

If the plaintiff cannot overcome First Amendment protection based on falsity and actual malice, she might do so based on the defendant having advocated violence or unrest. As discussed in Section II.B, First Amendment exceptions emerge when speech is tantamount to, or leads to, conduct legitimately proscribed, punished, or regulated under other statutes.311 The distinction a court must draw is whether the speech is pure advocacy or whether it violated, or compelled another to violate, a law.312 One example of disinformation that itself constitutes illegal speech is a statement qualifying as seditious conspiracy, that is, speech conspiring to use force to overthrow the government.313 In one such case, the Second Circuit upheld the conviction of a lecturer at a Virginia Islamic center for inducing others to levy war.314 Another example of unprotected speech in the antiterrorism context is the true threat. “‘True threats’ encompass those statements where the speaker means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals . . . [though the] speaker need not actually intend to carry out the threat.”315 In United States v. Viefhaus, the Tenth Circuit rejected a First Amendment appeal of a conviction for making a bomb threat, holding that the defendant’s speech crossed the threshold from political rhetoric to criminal threat when he stated that fifteen cities would be bombed.316 Thus, examples of disinformation that would avoid First Amendment barriers under the seditious conspiracy or true threat exceptions might include foreign disinformation advocating and enabling violent insurrection, or calls to stage a rally that pit sides prone to imminent violence against each other, as when the IRA promoted simultaneous pro- and anti-Islam marches in Texas.317

This proposed statute, by requiring the plaintiff to show that the defendant both intended to inflict harm and succeeded in doing so, incentivizes plaintiffs to target speech that falls into a fully unprotected category.318 But even if the harm were nonmaterial, the defendant claimed the law was restricting her expression of a viewpoint, and a court applied heightened scrutiny to the statute itself,319 the statute’s intent prong would lessen the defendant’s constitutional interests as weighed against society’s concern for security and civic trust. The Court in HLP applied demanding scrutiny to § 2339B’s content-based restriction and found it constitutional.320 That standard is something more than intermediate scrutiny—inquiring whether the contested law (1) advances important government ends; (2) is substantially related to advancing those ends; and (3) is not substantially more burdensome than necessary321—but less than strict scrutiny—under which the restriction (1) “furthers a compelling interest”; and (2) “is narrowly tailored to achieve that interest.”322 Whether tested facially or as applied, the disinformation statute falls within those standards and survives the challenge. As for important government ends, whether the court made the assessment itself or looked to see whether Congress had done so,323 preventing foreign meddling that leads to election interference or social unrest is a core societal need, akin to the national security concerns expressed in HLP with terrorism. The law is substantially related to those ends in that it applies to speech strategically designed to undermine society, and it is strictly limited in its use against protected speech. Also, while the statute regulates content, it curtails only speech that is false and furthers criminal purposes. Finally, if a court weighed minimal burden, that test would be met because the law’s difficult pleading requirements limit the likely frequency of suits. An as-applied challenge would necessarily be fact dependent, but this statute easily withstands most free speech claims that a defendant could muster.

Considering the material support statute as precedent strongly illustrates that, even though the terrain is fragile, this proposed law, properly administered, would appropriately balance First Amendment and national security interests.

Conclusion

Picture next year’s Thanksgiving dinner. You are sitting around the holiday table with friends and family. You talk about the kids, travel, and football. Then someone says, “Of course, we elected Obama, and he was an immigrant Muslim,” and you realize it was no joke. Or your uncle says, “George Bush had a plan to let Black people die in Katrina—I’ve seen the data.” The conversation stops. Or worse, someone across the table begins to argue. When lies become truth, community falters, even inside our own homes.

If we trace mistruth back to its source, we can stanch the flow of harmful ideas. It is at that stage, far from the dinner table and outside any one person’s or publisher’s control, that a citizen army is needed. This Article’s proposed statute is a government-enabled tool to help individuals strike back. And like the proverbial Dutch boy with his finger in the dike, one person alone will not fix the disinformation problem, but with many victims pressing their private right in court, the combined effort could be enough to force bad actors to retreat.

This Article defined disinformation, described its historic roots, and showed why it is a greater threat now than ever. It noted potential solutions and showed why none had fully met disinformation’s challenges or could do so alone. It proposed a new solution: a private right of action to claim money damages against those who knowingly disseminate mistruth. Finally, it highlighted analogous antiterrorism laws that paved the way for this law to follow and then, element by element, showed how the law would overcome obstacles on which the other solutions had faltered.

Even if this analysis does not close the door on every concern, it presents a largely turnkey response. It helps focus the debate and perhaps seeds new solutions where gaps remain. Disinformation is a systemic threat: its harms are sinister in that each occurrence looks minor, but its cumulative damage is stark. That threat will grow with time, or become worse if policymakers act unwisely in fighting it. A private right of action is a unique and powerful tool to unleash.

* Ari B. Rubin is a term law clerk to the Hon. Roy B. “Skip” Dalton, Jr. of the U.S. District Court for the Middle District of Florida. He previously clerked for Chief Judge Matthew J. Fader of the Maryland Court of Special Appeals. Prior to law, the author was a film and TV writer and producer, and he has written opinion pieces for publications including Politico, The Huffington Post, and The New York Daily News. The author would like to thank Professor Erin Carroll. She is both a teacher and a mentor. Georgetown Law, J.D. 2020; Wesleyan University, B.A. 2002.