Fake

The Internet today is full of fake people and fake information. Trust in both technology and institutions is in a downward spiral. This Article offers a novel comprehensive framework for calibrating a legal response to technology “fakery” through the lens of information security. Introducing the problems of Internet “MIST”—manipulation, impersonation, sequestering, and toxicity—it argues that these MIST challenges threaten the future viability of the Internet through two morphed dynamics destructive to trust. First, the arrival of the Internet-enabled “long con” has combined traditional con artistry with enhanced technological capability for data collection. Second, the risk of a new “PSYOP industrial complex” now looms. This chimera fuses techniques and employees from military psychological operations with marketing technologies of message hyper-personalization in order to target civilian audiences for behavioral modification. To address these two problematic dynamics through law, the Article constructs a broader theory of Internet untrustworthiness and fakery regulation.

Legal scholarship currently conflates two materially different forms of trust—trust in code and trust in people. As such, the Article first imports the distinction from computer science theory between “trusted” and “trustworthy” systems. Next, engaging with the work of marketing theorist Edward Bernays, philosopher Jacques Ellul, and the theory of illusionism, this Article explains that determinations of intent/knowledge and context can serve as guideposts for legal paradigms of “untrustworthiness.” This recognition offers a path forward for legal analysis of technology fakery and Internet harms. By engaging with multiple threads of First Amendment jurisprudence and scholarship, the Article next sketches the First Amendment bounds for regulation of untrustworthy technology content and conduct. Finally, this Article presents a novel framework inspired by the philosophy of deception to frame future legal discussions of untrustworthy technology fakery—the NICE framework. NICE involves an evaluation of three variables—the legal nature of the technology content or conduct, the intent and knowledge of the faker, and the sensitivity of the context. Thus, NICE leverages traditional legal models of regulation to the greatest extent possible in addressing technology fakery. This Article concludes with examples of regulatory approaches to technology fakery informed by the NICE framework.

Introduction

With traps and obstacles and hazards confronting us on every hand, only blindness or indifference will fail to turn in all humility, for guidance or for warning, to the study of examples.

—Justice Benjamin N. Cardozo1

Ours is the age of the Internet clapback and the competing dank meme. But, despite our intuitions of novelty, our current struggles with information integrity and technology are merely the latest round of a recurring historical contest.2 We have always been at war with technology fakery.

In the 1830s, the arrival and mass adoption3 of the telegraph brought with it the rise of conspiracy theories,4 even disrupting the dynamics of congressional constituent communications.5 As argued by Professor Joanne Freeman, “[t]he telegraph was the social media of its day,” and it “spread journalistic hot-takes throughout the nation with greater reach and speed than ever before.”6 For example, partially because of the telegraph,7 members of Congress no longer controlled the reach, speed, or integrity of the information around their affairs of honor,8 and political upheaval further fomented as a result.9

Parallel dynamics repeated with the arrival of radio and television in the United States. In 1907, key breakthroughs in amplification tube technology paved the way for the first image to be instantaneously transmitted through telegraph wires.10 This shift also enabled the arrival of radio and the first moving images transmitted by television in the 1920s.11 By the 1930s, radio had reached broad public adoption, and fake information, conspiracy theories, and the “snake oil” salespeople12 of earlier eras took to the airwaves to perpetrate fraud.13 Innocent confusion also existed: in 1938, the broadcast of a work of science fiction, The War of the Worlds, confused listeners who tuned in to hear a fake journalist seemingly provide real-time updates about an alleged alien invasion of Earth.14 Smallpox and polio vaccine disinformation and misinformation in print predated COVID-19 vaccine disinformation and misinformation online.15

We too live in an era where a new technology has amplified the reach of dueling words and images and where fakery happens in both real time and on a time-shifted basis. Speed and amplification of information—both fake and real—have increased as computing power has increased.16 And just as in prior eras, the risks of corrupted information damaging public discourse loom large.17 Internet fakery has already impacted the operation of our markets,18 our republic’s governance and national security,19 and the public’s sense of trust in the Internet and institutions more generally.20

No comprehensive theory of technology “fakery” that effectively merges traditional and modern jurisprudence currently exists in legal scholarship. As such, this Article offers a novel comprehensive legal framework to conceptualize fakery in technology contexts as an information security problem. Part I introduces the problem of Internet “MIST” or, more colloquially, the “Four Tarantulas of the Fakopalypse”—a modern variant of the 1990s meme of the Four Horsemen of the Infocalypse. It argues that four “tarantulas” of fakeness—manipulation, impersonation, sequestering, and toxicity—create a “MIST” of Internet fakery that harms trust. Part I then explains that two key dynamics complicate today’s Internet fakery problems: the arrival of the Internet “long con” and the risk of creating a “PSYOP industrial complex.” Using classical con-artistry theory, Part I first explains that the Internet “long con” merges timeless techniques of con artists with new information exploitation opportunities to more efficiently identify and exploit “suckers” or “marks.” Second, using the theory of propaganda and psychological operations (PSYOP), Part I explains that the techniques of Internet marketing have begun to blend with techniques and personnel from military psychological operations. This merger risks the rise of a “PSYOP industrial complex”—an enterprise that targets audiences for fakery and behavioral modification with progressively greater precision. In other words, psychological manipulation techniques learned from militarized operations in warfare appear to be transferring into use on civilian populations. These two dynamics have been instrumental in the progressive erosion of trust visible on the Internet today. Gleaning insights from the marketing theory of Edward Bernays, the work of lawyer and philosopher of technology Jacques Ellul, and the theory of illusionism, Part I concludes by reframing these dynamics, highlighting two elements that have historically underlain the assessment of untrustworthy content or conduct: the intent and knowledge of the faker and the context of the technology fakery.

Part II begins to map viable paths forward to halt Internet trust erosion due to fakery. It first introduces a core definitional distinction from computer science that is currently absent from the legal literature’s discussion of technology fakery—the critical distinction between being trusted and being trustworthy. It argues that legal approaches to fakery should focus on the regulation of untrustworthiness. Turning next to the philosophy of trust, Part II differentiates assessment criteria for trustworthy versus untrustworthy code, objects, and people. Finally, Part II concludes by engaging with multiple threads of First Amendment scholarship and case law to identify the First Amendment limits for regulation of untrustworthy Internet content and conduct.

Part III then merges the insights from prior Parts to offer a reframe for legal discussions of untrustworthy Internet content. This new framework, NICE, involves an examination of three variables—the legal nature of the fakery, the intent and knowledge of the faker, and the sensitivity of the context into which the fakery was injected. Part III then offers examples of regulatory approaches informed by the NICE framework for each category of Internet fakery/MIST.

I. The Internet Winter of Our (Dis)Content: The Tangled Web of Trust

And thus we arrive at Lucian’s weakness[,] . . . . a misguided admiration of the truth . . . . [and] those spiders, of mighty bigness, every one of which exceeded in size an isle of the Cyclades. “These were appointed to spin a web in the air between the Moon and the Morning Star, which was done in an instant, and made a plain champaign, upon which the foot forces were planted.” Truly a very Colossus of falsehood . . . .21

The earliest known work of science fiction in Western literature was a satirical novel.22 Written in the second century, Lucian’s True History by Lucian of Samosata chronicles the adventures of Lucian and the people of Earth as they battle invading attackers23 and giant spiders with a country-sized24 web.25 Now, we, the people of Earth of the twenty-first century, face our own epic battle against our own Colossus of falsehood with a web in the air26—technology fakery.

Much has been written about the various legal challenges presented by fake Internet content and conduct. This ample body of existing scholarship has generally approached the dynamics of various forms of technology “fakery” in a segmented manner. The most ambitious scholarship has primarily focused on either the private sector dynamics of the Internet economy and its relationship to data27 on the one hand or the political impact of propaganda and attempted election manipulation on the other.28 Professor Julie Cohen presents an insightful discussion of political economy, applying the perspectives of Karl Polanyi on the overall dynamics of the mutual constitution of the technology economy and its relationship to law.29 Professor Cohen highlights the appropriation of intangible resources and “performative enclosure” driven by repetition,30 as it connects with commodification/datafication; thus, her work reframes the response in law, particularly as technology platforms become increasingly prevalent.31 By comparison, Professor Shoshana Zuboff offers a critique of the business dynamics of the data economy through the lens of what she terms Internet “surveillance capitalism,”32 arguing that technology-mediated surveillance is an “instrumentarian power that fills the [trust] void, substituting machines for social relations, which amounts to the substitution of certainty for society.”33 However, both authors stop short of connecting their analyses with broader information security or national security concerns.34 Meanwhile, Professors Yochai Benkler, Robert Faris, and Hal Roberts present a thoughtful analysis of the role of fake information in the 2016 U.S. presidential election.35 In particular, the authors assert that they “believe there is an advantage to keeping separate the domain of politics, with its normative commitment to democracy, from the domain of commerce, and its normative commitment to welfare, consumer sovereignty, and consumer protection.”36 This Article builds on these scholars’ noteworthy prior work. However, it adopts an intentionally contrary approach.

This Article instead expressly merges the dynamics that have been previously differentiated by other scholars: it presents an analysis that assumes the interweaving of the political/national security domain and the commercial one in its approach to technology fakery. Specifically, this Article explains that these two domains—commercial and political fakery—have always been functionally interwoven, as a matter of both technological and historical practice. Further, as Part II explains, they are converging as a matter of First Amendment jurisprudence. Thus, we argue, a single broader framework is needed for combating Internet fakery. We offer one such approach, the NICE framework, in Part III.

But before we arrive at concrete proposals in Part III, we must start our discussion with an articulation of the meaning of the term “Internet fakery.” The Section that follows defines “Internet fakery” as the problems of “MIST”—manipulation, impersonation, sequestering, and toxicity—and explores them through the metaphor of tarantulas—a colloquial homage not only to Lucian and his epic second-century science-fiction spiders but also to early Internet history.

In one of the earliest articulations of Internet harms, in 1998, engineer and author Tim May argued that regulation of the Internet would be driven by a fear of the “specter of crypto anarchy” and the availability of technology tools that facilitate anonymous communication.37 In particular, these new tools, argued May, would likely raise concerns from law enforcement over stopping the “Four Horsemen of the Infocalypse”—terrorism, child exploitation, drug dealing, and money laundering/organized crime.38 While intended by May as tongue-in-cheek, hyperbolic commentary, these observations were largely accurate in predicting the first two decades of Internet law.39 Combining Lucian of Samosata’s giant spiders with May’s Four Horsemen of the Infocalypse, we might colloquially recast our problem of Internet fakery and MIST—manipulation, impersonation, sequestering, and toxicity—as the Four Tarantulas of the Fakopalypse.

A. The Problem of Internet MIST (or The Four “Tarantulas” of the Fakopalypse)

In the mid-1990s, the Silicon Valley offices of AT&T suffered from a physical security (literal) “bug” race condition40: the company’s basement was overrun by a colony of tarantula spiders.41 Mostly unbothered in their daily routines, AT&T employees (and the spiders) continued their work unflummoxed; the employees simply chose to cede operational control of the basement. As recounted by former employees, they avoided entering the new “tarantula cave” and just hoped for the best.42 But, unfortunately for the AT&T employees, their tarantula detente was destined for failure: tarantulas tend to mate rather efficiently,43 and they demonstrate comparative longevity, with females living up to thirty years.44 It was likely only a matter of time before the tarantulas expanded their territory into the workspace inhabited by humans.45 Yet, despite this looming spider invasion afoot (and underfoot), superficially the office appeared to be working as usual.46

While the last two decades have been characterized by restrained Internet regulatory action, staying this course places us and the future of the Internet, at best, in the position of the AT&T employees—in an unsustainable detente.47 We have reached the equivalent of the “tarantula cave” era of technology fakery. Much like the specter of an impending arachnid incursion,48 deficits of user trust loom large over the future of the Internet. Fake news,49 fake videos,50 fake users,51 fake reviews,52 fake websites,53 and fake services54 with fake offers,55 fake algorithms,56 and fake results57 seem omnipresent on the Internet. As explained by a recent study from the Pew Internet and American Life Project, “Trust has not been having a good run in recent years, and there is considerable concern that people’s uses of the Internet are a major contributor to the problem. For starters, the Internet was not designed with security protections or trust problems in mind.”58 Seventy-nine percent of Americans say they are not confident that companies will admit mistakes and take responsibility for misuse or compromise of information.59 Similarly, “[eighty-one percent] of the public say that the potential risks they face because of data collection by companies outweigh the benefits.”60 These drops in Internet trust also coincide with eroding trust in institutions and government generally,61 as fake Internet content helps exacerbate a self-reinforcing trust spiral downward.

At least four kinds of problematic technology “fakery” are visible today—the problems of manipulation, impersonation, sequestering, and toxicity (MIST).

1. The Problem of Manipulation

Nearly three decades ago, a famous New Yorker cartoon announced, “On the Internet, nobody knows you’re a dog.”62 Today, the ability to track individual users online has increased, as has the technological ability for companies and national security experts to attribute speech and conduct.63 However, the challenges faced by average users in parsing whether to trust content and persons they encounter on the Internet have arguably become even more formidable. In particular, users regularly encounter content that has been manipulated, either in substance or in its superficial attribution.64

Borrowing the lens of computer security, we might reframe the challenges of fake and manipulated technology content more elegantly and simply as (the latest iteration of) problems of information integrity in the technical sense.65 These problems of manipulation might be divided into two categories—manipulation of content and manipulation of authenticity.66
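
To make the technical sense of “integrity” concrete, consider the following minimal sketch (in Python, with hypothetical message contents and a hypothetical shared key): a keyed hash computed by a sender lets a recipient detect whether content was altered in transit, although it says nothing about whether the original content was truthful.

```python
# Minimal illustration: detecting manipulation of content with a keyed hash (HMAC).
# Integrity checks reveal alteration, not truthfulness. The key and messages here
# are hypothetical placeholders.
import hmac
import hashlib

SHARED_KEY = b"hypothetical-shared-secret"  # assumed to be exchanged out of band

def tag(message: bytes) -> bytes:
    """Sender computes an authentication tag over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Recipient recomputes the tag and compares it in constant time."""
    return hmac.compare_digest(tag(message), received_tag)

original = b"Candidate X will hold a rally at 7 PM."
original_tag = tag(original)

altered = b"Candidate X has cancelled the rally."  # manipulated in transit

print(verify(original, original_tag))  # True: content arrived unmodified
print(verify(altered, original_tag))   # False: manipulation of content detected
```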

a. Manipulation of Content

The film Catch Me if You Can67 describes the adventures of con artist Frank Abagnale Jr., a faker so skilled that a police chief allegedly once quipped: “Frank Abagnale could write a check on toilet paper, drawn on the Confederate States Treasury, sign it ‘U.R. Hooked’ and cash it at any bank in town, using a Hong Kong driver’s license for identification.”68 According to the book of the same title by Stan Redding with Abagnale, Abagnale allegedly landed a job at the Louisiana State Attorney General’s office after passing the bar exam and providing a forged law transcript from Harvard University.69 As this situation of a potentially forged law transcript illustrates, the problem of fake information is not a new problem arising from the Internet.70 While the law has perhaps most often analyzed the idea of manipulation in the context of document forgery71 and market manipulation in securities regulation,72 legal concern and scholarship over manipulation of information (under various definitions) has also arisen in the context of currency,73 art forgery,74 identity documentation,75 and other legal situations.76

At least three types of manipulated Internet content have received meaningful discussion in the legal scholarship—fake news,77 fake videos, and fake reviews. Legal scholarship has addressed the intent of fake news creators78 and its distortion of the electoral process,79 and it has described fake news as leading to a “cascade of cynicism,”80 a broader “distrust for all media production,”81 and a distrust of expertise in general.82 For example, Professor Alice Marwick has highlighted that the function of identity signaling often overrides concerns over accuracy for some participants in fake news purveyance,83 complicating the viability of the two primary solutions proposed for combating fake news84—fact-checking and media literacy.85 Additional scholars have highlighted the role of private versus public sector interventions in addressing manipulated content86 and argued that current First Amendment doctrine appears to protect fake news.87 As explained by two scholars, “The abundance of fake news is accompanied by claims that unfavorable but factual news is itself fake. By sowing seeds of distrust, false claims of fake news are designed to erode trust in the press . . . .”88 Still, other scholars have raised concerns that the Supreme Court’s commitment to the free flow of information under current First Amendment doctrine may limit the efficacy of policy interventions to curb fake news, even under the auspices of fraud prevention.89 Further, the copyright status of fake news has also been the subject of some legal analysis,90 raising the specter that the fictionalized nature of manipulated content might perversely grant it a form of potentially superior intellectual property status over truthful information.

Legal scholars have also devoted substantial thought to the context of fake and manipulated videos, such as “deep fakes.” As recent press coverage explains, with the help of readily available video editing tools, video content may be faked or manipulated to create a false impression of the drunkenness of a public figure91 or to time-shift content into a more desirable version of events.92 As explained by Professors Robert Chesney and Danielle Citron, one of the risks represented by deep fakes arises from a form of “Truth Decay” and a “Liar’s Dividend”—that “wrongdoers may find it easier to cast doubt on real recordings of their mischief.”93 Professors Mary Anne Franks and Ari Waldman argue that “deep fakes undermine free speech itself, at least of its targets. . . . [and] weaponize targets’ speech against themselves, harvesting their photos, videos, and audio recordings to create increasingly realistic, fraudulent representations,” and that “deep fakes erode the trust that is necessary for social relationships and political discourse.”94 Professor Nina Brown argues that manipulated content is “a significant threat to global stability” and that “[t]hese harms are not speculative.”95 Professor Jonathan Schnader raises the concern that Internet-enabled devices such as home assistants could be compromised and that “through the audio or video obtained through Alexa, deep fake audio or video could be used as blackmail material, held ransom until a person pays money or completes a task for the blackmailer,” i.e., presenting a problem with potentially direct national security implications, depending on the role of the target.96

A third category of manipulated information considered by the legal literature relates to fake reviews and their commercial harms. Scholars have described fake reviews as “the scourge of the reputation system”97 and review systems as inherently manipulable.98 Professor Eric Goldman describes a “mediated reputation system[]” as one with “third-party publisher[s] [that] gather[], organize[] and publish[] reputational information.”99 Professor Yonathan Arbel argues that in these mediated reputation systems, a mismatch exists “between the private incentives consumers have to create reputational information and its social value” and that consequently reputational information is “beset by participation, selection, and social desirability biases that systematically distort it.”100 Professor Emily Kadens explains that “[o]nline reviews . . . tend to be overwhelmingly positive or negative, with few reviews in the middle” and that this “‘regression to the extreme’ creates reputations that do not reflect a normal distribution of perspectives,”101 raising questions of their trustworthiness. As explained by Professor Lori Roberts, “[S]ome businesses have gone on the offensive to preempt negative reviews to which they cannot effectively respond by incorporating non-disparagement clauses to their terms and conditions for a purchase or service, effectively barring any negative consumer comments.”102

Indeed, in response to the business tactic of precluding negative reviews through end user license agreements (EULAs), Congress passed the Consumer Review Fairness Act103 in 2016.104 In this way, Congress has begun to craft shared process-based benchmarks for conduct in a context with fake and manipulated information risks.105 Enforcement activity under the Consumer Review Fairness Act has begun, working in concert with continued enforcement under the FTC Act.106 For example, the FTC recently settled claims against a cosmetics company whose employees created sephora.com accounts and posted fake reviews.107 But as the substance of fakeness108 is combined with questions of intent and quantification of harm, the analysis becomes more complex for legal purposes. In particular, legal valuation of injury and assessment of economic loss become challenging. These questions of valuation connect with the second type of manipulation—manipulation of authenticity.

b. Manipulation of Authenticity

The New York art world harbors many tales of brilliant forgers109 and resellers,110 sometimes even resulting in fake art hanging on the walls of some of the most prestigious museums in the world.111 One master of manipulating art validation was New York art dealer Ely Sakhai,112 who would acquire a minor work of a well-known master,113 have a duplicate painted,114 put the genuine certificate of authenticity from the original on the duplicate, and then sell the duplicate-plus-certificate and the original on two different115 continents.116 The key to his success was his skillful manipulation of provenance.117

In perhaps surprising ways, the shortcomings of the process of art authentication and the spotting of forgeries and counterfeits are somewhat parallel to the manner in which source validation processes for information (fail to) happen on the Internet. Both the art world118 and the Internet rely on inherently social methods of validation—methods capable of being manipulated and faked.119 As explained by Dr. Neil Brodie,120 an art auction can be conceptualized more as a social process than as an event; the auction house actively pairs audiences of buyers and sellers while seeking out experts to validate the objects up for sale.121 He points out that unsavory information about the histories of objects often is not sought out or is deliberately suppressed.122

In a similar spirit, the accurate identification of source is a question that the law regularly considers in various Internet contexts—from trademark law123 to certain types of campaign advertisements124 and contributions.125 In the legal scholarship, Professor Rebecca Green has applied a counterfeiting analysis to issues of Internet campaign speech. Recalling the historical attempts to pass off forged campaign material, she argues that “post-Watergate reform addresses distribution of forged campaign material. Yet it is not clear that it would cover technology-assisted counterfeits such as deep fakes.”126 Similarly, Professor Marc Blitz has argued in favor of analyzing questions of fake news as instances of forgery, arguing that “a distinctive type of harm may arise when the falsehood is not merely in the content of the speech that is intended to deceive, but is also in its purported source or vehicle.”127 Finally, Professors Jessica Silbey and Woodrow Hartzog128 point to insights from anthropologist Professor Graham Jones that “[t]he fake is only possible when there are normative, conventionalized, institutionalized standards of conduct and evidentiary practices that the faker can manipulate.”129 For these reasons in part, artists such as Banksy have begun to craft alternative social validation structures.130 Yet, even these approaches present vulnerabilities.131 Indeed, various failures of social validation are starkly visible among Internet fakery today.

Consider the role of celebrity endorsements and other acts of social verification on social media.132 Take, for example, the dramatic meltdown (documented in real time on social media) of the ill-fated 2017 Fyre Festival—a functionally nonexistent music festival that had been billed as “bigger than Coachella”133 and hyped heavily by highly compensated Internet “influencers.”134 The fallout from the festival ultimately ended with the FTC sending out more than ninety letters to influencers and marketers regarding disclosure of paid endorsements135 and with the organizer, Billy McFarland (described by the press as “the rent-a-yacht version of Frank Abagnale”),136 being found guilty of wire fraud and sentenced to six years in prison.137 In the aftermath, some of the influencers in question have also been sued personally138 and reportedly were subpoenaed by the bankruptcy trustee attempting to sort through millions of unaccounted dollars that moved through Fyre Media.139

Indeed, in situations such as that of the Fyre Festival, Internet hype and social authentication fakery start to blend into dynamics of fake personas, group pressure, and imperfect information. This blending is harnessed by the remaining MIST “Internet tarantulas” of impersonation, sequestering, and toxicity.

2. The Problem of Impersonation

A fake ID works better than a Guy Fawkes mask.140

The second tarantula of MIST fakery is impersonation—a fraudulent art whose history is long and storied.141 Grifters have posed as concert promoters,142 doctors,143 lawyers,144 and skilled workers of various sorts,145 and they have regularly gone undetected. Perhaps most famously, a grifter named George C. Parker, who sometimes posed as General Grant’s grandson, sold the Brooklyn Bridge, the Met, the Statue of Liberty, and Grant’s Tomb to unsuspecting marks more than once.146 Indeed, his epic grifts became legend and synonymous with the concept of smooth-talking salespeople seeking to trick unsuspecting and naïve would-be entrepreneurs with get-rich-quick fantasies.147 His victims, unaware they had been conned, were sometimes informed of the crime as they attempted to construct tollbooths on access roads.148

On the Internet, we regularly encounter acts of impersonation. Twitter accounts may seem to be run by a herd of cows.149 Instagram feeds may appear to be managed by photogenic pets,150 and pseudonymous accounts are plentiful, run by humans using privacy-preserving handles, frequently for expressive effect151 or to protect against retaliation.152 But, the Internet has long faced a scourge of users whose purpose is to trick others with fakery—much like grifters seeking to leverage marks. The Internet is unfortunately full of fake royalty153 and fake humans.154 Even the Brooklyn Bridge grift has been attempted through the Internet.155

While humans impersonating humans has long been a regulated area of conduct in law,156 Internet fakery now pushes beyond the conduct of the “influencers” described in the previous Section and their manipulations of content for self-interested reasons. Today the problem is also “crime-as-a-service,”157 sometimes designed to inflict national security harms and benefit U.S. adversaries. For example, compromised identifying information of unsuspecting humans is often purchased for purposes of online impersonation and spearphishing158 of government employees.159 But increasingly, new tools of technology such as machine learning are also used to generate large numbers of composite human-looking user profiles for automated impersonation.160 Misappropriated real photos might be paired with impersonated identities.161

Legal scholars disagree over whether these impersonations, when performed by bots, are problematic and over the extent to which legal intervention is warranted.162 We find problematic those impersonations undertaken in furtherance of fraud or national security harm. Consider recent cases of foreign operatives posing as U.S. persons online, attempting to foment unrest.163 Or consider the conduct of dating website Match.com164 as described in the FTC enforcement action against it.165 According to the FTC complaint, Match.com enticed users to sign up for subscriptions through the use of fake profiles expressing interest in connecting with the particular user, causing them to sign up for six months of “free” services;166 “[t]he FTC alleges consumers often were unaware they would need to comply with additional terms to receive the free six months. . . . [and] were often billed for a six-month subscription . . . .”167 In addition to converting their marks’ time and money, fake users such as these leave their human and corporate marks with intangible losses—feelings of annoyance, betrayal, and reputational harms, including the loss of goodwill and potential loss of intellectual property.168 In particular, trademark harms through “brandjacking”169 might arise in this manner,170 a new flavor of the sorts of trademark concerns that led to the passage of the Anticybersquatting Consumer Protection Act (ACPA).171

The challenges facing users in identifying fake information and sources also connect with the third MIST category—sequestering.

3. The Problem of Sequestering

Delivery Man: Fate whispers to the warrior.

Ethan Hunt: A storm is coming.

Delivery Man: And the warrior whispers back.

Ethan Hunt: I am the storm.172

The third tarantula of fakeness is information sequestering173—in other words, self-exacerbating information imbalances that result in both individual and group exploitation (and consequential third-party harms). Again, the law’s concern over information sequestering predates the Internet; it is visible in traditional bodies of law, such as contract law174 and tort.175

a. Individual Sequestering: Informational Exploitation

The field of Internet marketing has developed progressively more sophisticated user-targeting mechanisms; social media companies “match audiences” and data brokers monitor user interactions with a high level of granularity.176 Data aggregation and merger capabilities have advanced through technology such as facial recognition, machine learning, and various sensor-enabled data collection through the Internet of Things177 and the Internet of Bodies.178 Humans are now tracked both on and off the Internet, and streams of increasingly detailed data are merged in ways that are often nonobvious (to users).179 These problems of information asymmetry have become more severe over time180: the content aimed at users has regularly become more tailored—curated in line with what builders of algorithms believe will elicit user engagement.

In other words, perhaps counterintuitively, from a user’s perspective the Internet and related technologies have sometimes facilitated new methods of information impoverishment. They can limit rather than enhance opportunity for ready comparison of information. In the name of convenience, the algorithms of websites and apps often impose curated preferences on users, denying users visibility into the full range of options. Consider recent incidents where online credit card offers were allegedly sequestered based on gender rather than credit score,181 and employment182 and housing offers183 were allegedly sequestered (directly or indirectly) based on race.184 Thus, while the Internet appears to offer a glut of information, tracking technologies distort available information by crafting an opportunity-limited, sequestered Internet experience for selected users. Although it now implicates twenty-first century technologies, the problem of artificial information sequestration itself is not a new concern for law.185 Indeed, contract law has long considered the risks of sequestration as a method of deception during the process of contract formation, potentially rendering formation invalid.186

For example, in the case of election information sequestering, former Cambridge Analytica employees explain that the same election advertisement can be morphed into thousands of variants in order to target particular individuals in ways believed to be more resonant.187 These dynamics became evident when, after congressional pressure, Facebook launched a campaign advertising archive,188 which permitted users to review versions of advertisements run on Facebook by various candidates for office.189 This archive, coupled with disclosures from whistleblowers, reveals a high level of personalization and A/B testing in honing sometimes seemingly contradictory political messages.190 This extent of user targeting in political messaging not only raises concerns with respect to information sequestering but also highlights the existence of a toolbox of techniques that are likely effective at exploiting users’ lack of skill in detecting social engineering and potential fakery. These tools also, in turn, feed the creation of increased social sequestration and polarization. In other words, what is lost in this process is the creation of a shared user experience in both the offline and online world—a common frame of reference.
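
To make the mechanics of such hyper-personalization concrete, the following is a deliberately simplified, hypothetical sketch in Python; the profile fields, message variants, and weights are invented for illustration and do not describe Cambridge Analytica’s or any platform’s actual system. It shows only the basic logic by which a single message can be split into variants and matched to individual users based on profiled traits.

```python
# Hypothetical illustration of microtargeted variant selection.
# The profile fields, variants, and weights are invented for exposition;
# real ad-delivery systems are far more complex and use learned models.
from dataclasses import dataclass

@dataclass
class AdVariant:
    text: str
    traits: dict  # trait name -> weight this variant is believed to resonate with

VARIANTS = [
    AdVariant("Protect your community's future.", {"agreeableness": 0.9, "age_over_50": 0.6}),
    AdVariant("Don't let them take what's yours.", {"neuroticism": 0.8, "distrust_government": 0.7}),
    AdVariant("Be bold. Demand change now.", {"openness": 0.9, "age_under_30": 0.5}),
]

def select_variant(user_profile: dict) -> AdVariant:
    """Pick the variant whose trait weights best match the user's inferred traits."""
    def score(variant: AdVariant) -> float:
        return sum(weight * user_profile.get(trait, 0.0)
                   for trait, weight in variant.traits.items())
    return max(VARIANTS, key=score)

# A user profiled (via quizzes, clickstream, or data brokers) as anxious and distrustful
# sees a very different message than a neighbor profiled as open and young.
user_a = {"neuroticism": 0.9, "distrust_government": 0.8}
user_b = {"openness": 0.7, "age_under_30": 1.0}
print(select_variant(user_a).text)  # "Don't let them take what's yours."
print(select_variant(user_b).text)  # "Be bold. Demand change now."
```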

b. Social Sequestering: Alternative Belief Systems

The second category of sequestering involves dynamics of group isolation and the collective version of what Professor Cass Sunstein has referred to as Daily Me191 echo chambers—self-reinforcing groups who create alternative belief systems driven by factually unsupported beliefs. As these information silos develop progressively more elaborate belief systems, their members may disconnect from people outside the group; because of their reliance on fake information pushed through tech-enabled mechanisms of social sequestering, they experience (and cause) various categories of potentially legally problematic content and conduct. In particular, their engagement on the Internet increasingly involves recognizable memes and tropes that leverage social sequestration to further reinforce a crafted group identity. For example, members of Internet conspiracy groups often describe feelings of alienation from family and friends.192 Indeed, these dynamics of social sequestering bear some resemblance to former participant descriptions of informational dynamics of cults193 on the one hand and pyramid/multi-level marketing (MLM) schemes194 on the other. Again, perhaps much like cults and pyramid/MLM schemes, the harms suffered by these groups include increased susceptibility to technology-assisted financial exploitation. For example, some media personalities who arguably leverage social sequestration dynamics have sold fake products online to their audiences, triggering warnings from federal agencies.195

4. The Problem of Toxicity

Martha grew flowers. . . . They were beautiful flowers, and their scent entranced. But, however beautiful, these flowers were also poisonous.196

The fourth tarantula of Internet fakery is toxicity.197 Toxicity on the Internet is not new; it has caused concern for users and scholars alike since the Internet’s earliest days. For example, as early as 1993, authors described virtual “rape” that disrupted Internet communities.198 From the racist memes of 8chan199 to the group harassment of Gamergate200 to posts of nonconsensual pornography201 and child exploitation content,202 the dark side of the Internet causes even the most zealous First Amendment defenders to recognize the negative consequences of unbridled technology-assisted toxicity.203 Toxic environments are likewise not new to law: they are addressed literally in environmental law and in both physical204 and psychological205 terms in employment and labor law.

Yet, the fit between existing law and recourse for technology toxicity is not always ideal.206 Consider the tragic case of Megan Meier, a Missouri teen who committed suicide after orchestrated bullying and verbal abuse by a group of her classmates and an adult, Lori Drew.207 While the Missouri court struggled to apply Missouri law at the time, California prosecutors relied on a novel theory under the Computer Fraud and Abuse Act—the idea that a mere breach of contract can present the basis for a criminal charge of computer intrusion—to bring charges against the defendant.208 Although a jury ultimately convicted the defendant in United States v. Drew,209 the judgment was set aside through a judgment notwithstanding the verdict,210 and scholars have critiqued the theory of the case crafted by prosecutors.211 Regardless of whether scholars agree with the court’s analysis, however, the facts of United States v. Drew illustrate how toxic Internet exchanges can seep into the consciousness of Internet users, with physical harms sometimes following.

Today, Internet toxicity concerns are perhaps most readily visible in the context of the ongoing debate over whether to amend Section 230 of Part I of Subchapter II of Chapter 5 of Title 47, commonly referenced as “Communications Decency Act Section 230” or “CDA 230.”212 Passed in 1996, CDA 230 is often credited with stimulating the rapid growth of Internet business and content of the last two decades.213 However, the trade-off for the buffers of liability protection of CDA 230214 has been in part the proliferation of fake content and conduct that is the subject of this Article.215 While digital copyright issues were provided additional consideration separately in the Digital Millennium Copyright Act (DMCA),216 the language of CDA 230 does not include the DMCA’s updating mechanisms through the Library of Congress. As such, robust calls for217 (and against)218 amendments to CDA 230 have become louder as the toxicity of Internet fakeness has increased.

Although these problems of MIST map to traditional legal corollaries, they are exacerbated by two dynamics relating to data aggregation and leveraging capabilities that are central to (the current version of) the Internet—the arrival of the Internet “long con” and the risk of the rise of what we term the “PSYOP industrial complex.”

B. Internet Con(tent and Conduct)

Before the exploits of con artist Frank Abagnale, law enforcement faced the grifts of Ferdinand Demara, also known as the Great Impostor.219 Demara grifted through life from the age of sixteen under numerous identities.220 He joined the Navy,221 faked his suicide,222 became a psychologist,223 performed surgeries as a doctor,224 taught as a college professor,225 founded a college,226 and even studied law.227 Later, he explained that the secret to his success was driven by two beliefs: “One was that in any organization there is always a lot of loose, unused power lying about which can be picked up without alienating anyone. The second rule is, if you want power and want to expand, never encroach on anyone else’s domain; open up new ones.”228

Demara’s two principles for “[e]xpanding into the power vacuum”229 may explain a portion of the dynamics driving technology fakery today. The first dynamic of harnessing “loose” power can be seen in the merger of the state of the art of offline social engineering with the state of the art of technology-assisted data aggregation—a merger that targets “marks” more effectively. We might term this dynamic the arrival of the Internet “long con.” The second dynamic, opening new domains, can be seen in market movement toward a (problematic) merger between private sector information-targeting capabilities and the techniques of psychological operations from military contexts—what we term the “PSYOP industrial complex.”

1. The Internet Long Con

The crook is always attracted to regions of sudden prosperity and quick expansion. There he finds loose and easy money.230

—Edward H. Smith

In 1923, con artist Edward H. Smith recounted the theory and folk history of classical U.S. con artistry in his work, Confessions of a Confidence Man: A Handbook for Suckers.231 Smith explains that “[c]onfidence is a business, and, like all business, changes and conforms to conditions. In fact, con takes rise from the conditions of life about it and adapts itself as does social life.”232 Thus, it should perhaps be no surprise that as the Internet era became the new gold rush, grifting and scamming evolved to include it. For example, traditional romance scams once again caught the attention of the Federal Trade Commission as they successfully shifted online: instead of a con artist actively cultivating a relationship in physical space,233 a version of the story arose with an Internet “sweetheart” (or catfish)234 living in another country without the funds to leave.235

Cons divide into two categories—short and long/big.236 As explained by Luc Sante, the short con is a one-shot interaction in which the mark is tricked out of the money on hand, whereas the long con involves multiple interactions; in the classic long con, “a form of theater . . . staged . . . for an audience of one, who is moreover enlisted as part of the cast,” the victim is sent home to get more money to lose.237 The cons described in Smith’s book unfold across days or weeks, often involving multiple scenes and actors.238 Indeed, Smith describes a well-conceived confidence game as including at least five separate steps prior to the ultimate payoff: foundation work,239 the approach,240 the build-up,241 the payoff or convincer,242 the “hurrah,”243 and sometimes also the in-and-in244 and the corroboration.245

For the purposes of a con artist’s foundation work in particular, ease of access to information about the mark becomes another point of the mark’s vulnerability. As the Federal Bureau of Investigation has explained, the Internet has provided new avenues for con artists; information on social media sites and other Internet sources means that the con artist can craft a plan for generating a false sense of familiarity with the mark through background research.246 Data resale companies offer “background check” information on any person247 of a con artist’s choosing, i.e., extensive reconnaissance and background reading, full of clickstream data, purchases, places of former employment, past addresses, networks of friends, and other information, all of which may prove potentially useful in a con. As explained in an interview with one expert, “Today the con artist’s job is easier than ever because much of the work—gathering information about a potential mark—is done for him by people who voluntarily check in wherever they go.”248 In other words, self-disclosed Internet information offers valuable insights into a potential mark’s preferences and the kind of confidence games that might succeed.249 Additionally, the Internet potentially allows some aspects of the con to be automated as well as personalized.250 Thus, cons become less labor-intensive to run, potentially allowing the simultaneous manipulation of a larger number of marks than may be practical in classic face-to-face cons.

Smith also explains that one reason for the success of cons is that they “play[] an invariable chord in the human make-up—good old earthy greed.”251 He continues: “There are other reasons why con is perennial. It has taken advantage from the beginning of the public foibles, of what is now termed mass psychology.”252 This observation connects us to two other sets of social engineers—marketers and propagandists/PSYOP professionals—and their uses of increasingly granular information-targeting capabilities for goal-oriented exploitation of human psychology.

2. The PSYOP Industrial Complex

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.

—President Dwight D. Eisenhower253

During World War II, the German Army blanketed its own forces with leaflets, attempting to boost morale among its troops.254 Called Skorpion, the leaflets discussed new “super weapons” and the hope of German victory.255 However, Allied psychological warfare personnel obtained copies of Skorpion and prepared their own fake version of the leaflets—a version with an Allied slant to the information but otherwise identical.256 The ploy succeeded: after the Allies airdropped millions of these fake leaflets on German troops in the field, the true Skorpion leaflets were soon discontinued.257 These are the fakery techniques of the military field of psychological operations, or PSYOP.258

As explained by the Army Field Manual on Psychological Operations Tactics, Techniques, and Procedures,

PSYOP are planned operations that convey selected information and indicators to foreign target audiences (TAs) to influence their emotions, motives, objective reasoning, and ultimately, the behavior of foreign governments, organizations, groups, and individuals. The purpose of all PSYOP259 is to create in neutral, friendly, or hostile foreign groups the emotions, attitudes, or desired behavior . . . .260

The Army further explains that PSYOP techniques are ever-changing261 and “[p]roven in combat and peacetime.”262 They are “one of the oldest weapons in the arsenal . . . , as well as an important force protector, combat multiplier, and nonlethal weapons system.”263 Indeed, as the story of the Skorpion leaflets reminds us, strategic fake information injection has long been a staple tool of military operations. Fakery is not new or specific to the Internet.

Legal scholars usually frame discussions of fake Internet information and “disinformation” under varying (nonmilitary) definitions and without clear engagement with this complex militarized history.264 The history of the field of PSYOP reveals the importance of filling this gap in existing scholarly work.265 Almost all existing legal scholarship analyzes Internet fakery in a compartmentalized way—as something either political on the one hand or economic on the other,266 but this segmentation misframes the problem. The history of PSYOP reveals a clear absence of sectoral segmentation. PSYOP has frequently reached into the private sector for state-of-the-art psychology and advertising industry knowledge.267 In other words, the interwoven nature of private sector and public sector in PSYOP and psychographic propaganda efforts has been a historical reality for at least a century in both the United States and elsewhere. Framed in the modern language of information security, the dynamic reflects an early form of what one of us has elsewhere described as the problem of “reciprocal security vulnerability.”268 The information security dynamics of the private sector influence the public sector and vice versa. To wit, a comprehensive legal paradigm for Internet fakery should explicitly recognize and consider both economic and political Internet fakery dynamics as part of a single construct.

An early example of the merger of various private sector psychographic techniques with psychological operations is visible in the writing of Edward L. Bernays, a founder of modern advertising messaging269 (and perhaps the original user of media “influencers”).270 Bernays served during World War I as an integral part of the U.S. Committee on Public Information (CPI) to message the war effort.271 Indeed, he coined the term “engineering of consent,”272 and his seminal work, Propaganda,273 blends insights from Freudian group psychology with behaviorist principles.274 In the book’s opening sentence, channeling his hybrid experiences in both government and advertising settings, Bernays states that “[t]he conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society.”275 He highlights the role that messaging reinforcement and repetition play in maximizing efficacy,276 stating that “[p]ropaganda is the executive arm of the invisible government.”277 But, today’s problems of Internet fakery add a new wrinkle. They are rooted not only in this “pull” from the private sector into military PSYOP; they also evidence the arrival of the inverse relationship from military settings into commercial ones.278 A new industry of disinformation-for-hire or “dark PR”279 has arisen, used not only by governments280 but also by companies, celebrities, and others.281 The clients of these dark PR firms are knowingly hiring ex-PSYOP operatives to engage in targeted disinformation operations282 on their behalf.283 But, in addition to the dark PR industry and a broader revolving door of PSYOP-trained personnel between military and civilian contexts,284 there are potentially broader parallel dynamics between military PSYOP techniques and the common audience-targeting techniques of today’s Internet advertising infrastructure in general.285 Thus, today’s Internet fakery includes a new reciprocal pull where government PSYOP techniques appear to be leaking into the private sector.286 In brief, Internet fakery today potentially arises in part from PSYOP techniques beginning to creep into civilian contexts.

Marketing professionals themselves are ethically troubled by this creep. In the words of one such communications professional, the current Internet operations of dark PR firms and related private sector dynamics have constructed an “ecosystem that is just so ripe for professional lying.”287 Further, marketing executives explain, the new norms of extreme microtargeting exploitation conflict with the traditional ethical boundaries of pre-Internet marketing.288 The transgression of this traditional ethical boundary, when analyzed in tandem with the principles of PSYOP, highlights the scope of a looming problem: the potential rise of a new commercial data exploitation ecosystem with fewer ethical limits that leverages known PSYOP techniques to push Internet fakery at domestic populations for profit and social disruption.

Consider the recent (unsavory) efforts289 of a now-defunct public relations firm that was hired for the purpose of creating racial tension in South African society to distract the public’s attention away from corporate clients’ problematic conduct in the country.290 Or consider the dynamics of the Facebook–Cambridge Analytica291 relationship292 that ultimately resulted in FTC enforcement action293 against both companies in the United States,294 and actions by the U.K. Information Commissioner.295 As articulated by the FTC in its complaints against Cambridge Analytica and Facebook, Cambridge Analytica and its officers “deceived consumers by falsely claiming they did not collect any personally identifiable information from Facebook users who were asked to answer survey questions and share some of their Facebook profile data.”296 According to whistleblowers, Cambridge Analytica’s business model297 involved obtaining data about voters, using this to profile the voters psychologically, and then sending them targeted political ads on social media, including ads intended to discourage some types of voters from voting at all.298 Specifically, the process employed by Cambridge Analytica appears to have merged psychographic techniques of marketers with PSYOP techniques of military professionals.299 Indeed, the company had allegedly employed ex-government operatives, skilled professionals presumably extensively trained in PSYOP techniques and spycraft.300 Again, former social media marketing executives voice concern: they explain that “the personality quiz that Cambridge Analytica created was nothing special”—it reflected Internet psychographic profiling already prevalent in marketing.301 What differed was the lack of self-imposed ethical guardrails that had traditionally limited maximum exploitation of target audiences.302

But perhaps even more troublingly, adversaries are increasingly comfortable with using the Internet to target civilians on domestic soil.303 While the United States generally deems its own use of psychological operations to be limited to times of war, this constraint is not necessarily shared by other countries,304 some of which have historically been more comfortable with unleashing disinformation campaigns on their own citizens.305 Private sector digital communication tools can now be used to target not only members of the U.S. military in PSYOP operations but also their extended civilian networks of family and acquaintances. This tactic—leveraging civilian communication tools and exploiting civilians as part of PSYOP—is already in use by our adversaries in other parts of the world.306 Indeed, the “active measures” conducted by adversaries attempting to manipulate U.S. voters in the 2016 election might be described as a foreign PSYOP on civilians carried out through dark PR and social media.307

Similarly, allegations of U.S. companies attempting to seek out government-grade surveillance technologies to deploy against their own users raise concern.308 Such technologies might ostensibly be perceived to benefit these companies’ advertisers; however, in situations where some of those Internet advertisers are impersonations by foreign influence operatives, the problem of reciprocal security vulnerability arises—U.S. national security concerns cannot be divorced from commercial information security ones.309 Because a portion of our technology fakery challenges arises from potentially foreign sources engaging in PSYOP, Internet fakery erodes national security interests, as well as commercial and civil ones.310 Moreover, whether an Internet fakery operation is financially motivated or done at the behest of a foreign power, ultimately, the deceptive experience of the operation feels the same in real time from a user’s perspective (and exploitative after the fact).311

Notably, the risk of creeping convergence of private sector advertising technology with military PSYOP techniques may bring to mind a warning issued by President Eisenhower in 1961. While the first part of Eisenhower’s warning is familiar, the second part—the equally prescient warning against democracy becoming captive to a technological elite—is less well known. In his farewell address, President Dwight D. Eisenhower cautioned,

[W]e must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. . . . We must never let the weight of this combination endanger our liberties or democratic processes. . . . Akin to, and largely responsible for the sweeping changes in our industrial-military posture, has been the technological revolution during recent decades. . . . [W]e must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite. . . . [T]his world of ours, ever growing smaller, must avoid becoming a community of dreadful fear and hate, and be, instead, a proud confederation of mutual trust and respect.312

Today, we are at risk of developing the type of technological elite that may have troubled Eisenhower. Channeling Eisenhower’s framing, today we face the risk of private technology companies potentially leveraging PSYOP-trained personnel and techniques to maximally target civilian populations, directly and indirectly giving rise to both national security and economic threats.313 In other words, these dynamics present a risk of what might be called the rise of a PSYOP industrial complex.

In summary, today’s technology fakery presents a muddled scenario with blended commercial and national security implications. The next Section begins to reframe this formidable legal challenge. A blended framework becomes feasible when legal analysis starts from two unifying elements: identifying the intent behind the fakery and assessing the fakery in context.

C. Exploiting Vulnerabilities: Intent and Context

There’s a sucker born every minute.

—likely never said by P.T. Barnum314

The preceding Sections have introduced three key insights. First, prior Sections have identified four categories of Internet fakery—the dynamics of Internet MIST. Second, they have articulated two complicating dynamics—the arrival of the Internet long-con and the risk of the rise of a PSYOP industrial complex. Third, prior Sections have explained that because of the blended nature of civil and national security harms reflected in technology fakery, a robust legal paradigm must be flexible enough to conceptually encompass both. In this Section, we offer a fourth insight: two key elements vary across the most complicated instances of information fakery, regardless of whether the fakery arises from a private threat actor (e.g., a con artist) or a public one (e.g., a military information warfare unit)—the intent to deceive through fakery and the presence of affirmative acts to control the context of the fakery. Let us unpack these two elements of intent and context by analyzing the fakery dynamics of a classic children’s game of Telephone315 and a hypothetical (physical space) “Disinformation Booth.”

Imagine a row of children sitting before you playing Telephone. As each child conveys the secret information to the next in the row, some scrambling is almost inevitable. Despite requests for the “Operator” and repetition,316 a contorted result often happens due to mishearing, not intentional manipulation. But imagine that a “malicious” six-year-old317 intentionally compromises the integrity of the message. Legally cognizable harm does not result, and an “audit” of the chain of communication will readily reveal the short-statured “attacker” in the scenario. Why? Because the “threat actor” engaging in fakery is operating within the constrained context of a familiar children’s game where everyone understands the context and has agreed to play.

But now compare the game of Telephone to the fakery of a con artist who serially sells fake tollbooths on the Brooklyn Bridge. The intent is deception, and victims experience the legally cognizable harm of financial loss—the price of the “tollbooth” paid by the mark and the “tolls” the duped mark then collects from unsuspecting third parties. The context—a bridge transit—is one that lends credibility to the con. Some bridges do, in fact, have tolls associated with them,318 and the con artist exploits this possibility to his advantage as part of the deception. Uncontroversially, this act of fakery is one likely to lead to legal sanction. None of the “marks” understood the full context, and the perpetrator crafted content and conduct optimized to deceive.

Now consider an example in between—an orchestrated campaign of credible-looking Disinformation Booths in front of Smithsonian museums that intentionally misdirect every other person to the wrong place. The diagnosis of the problem becomes more complex, as does the articulation of a cognizable legal harm.319 Yet, the intent to deceive remains—even if objective determination of that intent and the quantification of the harm become more difficult as a legal matter. The coordinated Smithsonian Disinformation Booth campaign might seem intuitively closer to the fake Brooklyn Bridge tollbooth than the children’s game of Telephone.320 Now imagine that the Smithsonian Disinformation Booth operation did not simply target every other patron. Imagine instead that it is operated by a militant British anti-cat/pro-bird organization, KatzRKillrz, that only targets cat owners for harm.321 As patrons approach the Disinformation Booth, they are vetted for cat ownership using a combination of facial recognition and machine learning that performs real-time background checks for “cat-egories” of suspect purchases such as kitty litter and social media photos with cats. All patrons deemed to be cat sympathizers are then provided disinformation, directed to an area where various “cat-astrophes” await them—a “cat-tack” by an aggressive group of screaming militants, a pelting with used kitty litter, a pickpocket with sticky “paws for the cause,” or a stealthy “animal owner control” vehicle that will attempt to run them over at high speed. Some targets of the Disinformation Booth may exit the experience feeling unharmed—perhaps they recognize the directions as erroneous. Some targets may rely on the information to their detriment and experience feelings of emotional distress and concern for physical safety, e.g., the scenarios with the “cat-tack.” Some targets may incur small financial expense but meaningful discomfort and dignity harms, e.g., the litter-pelting scenarios. Finally, some targets may experience uncontroversially illegal criminal acts and civil harms, such as being mowed down by a vehicle. Yet, all of these harms—some more legally actionable than others—start from the same Disinformation Booth fakery.

As these hypotheticals highlight, two key points of variation are visible across instances of fakery—the degree of intent to deceive (and of superior knowledge) on the part of the faker, and the presence of concrete acts of context control that lead to harms whose scope and risk are not understood by the target. Indeed, deceptive intent and high context control are hallmarks of the fakery professionals discussed in prior Sections—con artists and PSYOP professionals. But con artists and PSYOP operatives are not the only masters of fakery. The work of a third group of professional fakers also involves intentional deception and high context control—the work of professional illusionists, i.e., magicians. Magicians, like con artists and PSYOP professionals, exploit human information vulnerabilities for a living. Yet, audiences are not harmed by watching magic shows. What differentiates magicians from con artists and PSYOP professionals?

In brief, the key lies in meaningful consent and correctly placed trust. Con artists and PSYOP professionals exploit information vulnerabilities in humans without meaningful consent in context in order to further their own goals, often through abuses of trust. In contrast, magicians exploit informational vulnerabilities in furtherance of a shared enterprise of entertainment, crafting a context where the risk of harm is consciously limited and trust is not abused.

1. Nonconsensual Exploitation of Informational Vulnerability

Propaganda ceases where simple dialogue begins.

—Jacques Ellul322

As described previously, early stages in both long-con and PSYOP operations involve intentionally curating a target audience for exploitation without their meaningful consent in context.323 This intent manifests itself in at least two concrete ways—first, in the selection and targeting of the audience for maximum exploitation and, second, in the nature and extent of context control measures in messaging.324 In PSYOP, the step of identifying target audiences and mapping the audience to appropriate messaging has always been a data-intensive enterprise, incorporating both human intelligence from other information operations and insights from psychographic data.325 In particular, the Army PSYOP Field Manual includes a discussion of “[a]dvertising and [s]ocial [m]arketing” techniques as part of psychological operations326 and the specifics of the operation’s context.327 Much as an Internet long-con artist or an aggressive Internet marketer would, the Field Manual advises operators to seek out external databases of information from private-sector sources such as “polling companies” to assist with this targeting enterprise.328

In crafting context control during messaging, just as a con artist might intentionally fabricate false identities or fake stories to reel in a target, psychological operations techniques also seek to create fake “stories” to control the behavior and beliefs of targets about the context. In particular, PSYOP leverages three categories of fake information—misinformation,329 propaganda, and disinformation.330 In contrast to misinformation, propaganda and disinformation display hallmarks of intentionality.331 In other words, the distinctions among misinformation, propaganda, and disinformation turn on the objectives of the operation and, in particular, the intent of the information disseminator.

Returning quickly to the problems of Internet MIST from Part I, we see that intent similarly plays a key role in each of the four problematic dynamics. Manipulation and impersonation include an intent to deceive. Sequestration requires intentional acts to limit information access. Toxicity may involve the intent to harass or exploit targets.

While Bernays’s theory332 offered one possible framing for the influence of private sector branding on government communications, the work of lawyer and philosopher of technology Jacques Ellul may offer a useful framework for analysis of intent. Specifically, framed in the language of information security, Ellul’s work offers potential insights about the intent of fakery professionals during their nonconsensual information vulnerability exploitation—a view through attackers’ eyes. In his own seminal work, a book which bears the same title as Bernays’s book, Propaganda, Ellul explains that information manipulation or “[p]ropaganda is called upon to solve problems created by technology, to play on maladjustments, and to integrate the individual into a technological world.”333

In prior legal scholarship, Ellul’s work has been applied to questions of law and medicine,334 lobbying,335 religion in the workplace,336 independent judgment,337 technology theory,338 discourse,339 patentability,340 and lawyer conscience.341 Ellul’s work has been aggressively critiqued by legal scholars,342 philosophers,343 and media theorists for, among other reasons, its overdeterminism and superficial understanding of audience response to messaging.344 However, Ellul’s perspective, when viewed through the lens of information security as a description of the subjective worldview held by attackers—fakery professionals such as con artists and PSYOP experts—may remain useful. Thus, we offer Ellul’s insights on propaganda for a limited purpose, as a stepping-stone toward an objective framework: his work potentially articulates the goals of fakery creation and dissemination strategy through the eyes of an information attacker engaged in fakery.345

Specifically, Ellul explains that a “news event may be a real fact, existing objectively, or it may be only an item of information, the dissemination of a supposed fact.”346 “What makes it news is its dissemination, not its objective reality.”347 In other words, Ellul is highlighting the role technology plays in the amplification348 of fake information and its relationship to misinformation and disinformation—political349 or otherwise. Ellul then argues that manipulation through information can act as a unifying force to generate potentially destructive group identity around the crafted information.350 Speaking in words that may resonate with modern conversations about the viral spread of fake news and political disinformation, Ellul explains that “[w]hat is needed, then, is continuous agitation produced artificially even when nothing in the events of the day justifies or arouses excitement,”351—i.e., attention control tactics.352 This perspective rings consonant with the Army PSYOP Field Manual and with the operational choices of Russia’s Internet Research Agency in its informational interventions in the 2016 U.S. presidential election.353 Indeed, the Internet MIST dynamics of manipulation, impersonation, sequestration, and toxicity combine to create a context akin to the one described by Ellul as ripe for exploitation by attackers: the Internet presents a technology context characterized by efforts to “create a sense of urgency”354 and “FoMO,”355 extreme amplification,356 and ephemerality of memory357 (despite existing archiving).358

For Ellul, master propagandists exploit the faux urgency created by technology to generate what he calls the “current-events man” who is a “ready target for propaganda.”359 In particular, for Ellul the context of ephemerality created by technology blends with human cognitive processing limitations and a tendency toward using emotion360 (over thought) to create fertile territory for propagandists.361 He explains that propagandists play on humans’ (perceived) cognitive processing deficits with respect to holding conflicting ideas at the same time and identifying inconsistencies,362 a problem he claims can be addressed in part through dialogue.363 He states, “To be effective, propaganda must constantly short-circuit all thought and decision. It must operate on the individual at the level of the unconscious.”364 Again, mapping onto our prior discussion of con artists’ preference for more sophisticated marks as described by Smith,365 Ellul also highlights that it is the overconfidence of more sophisticated target audiences that renders them particularly vulnerable to attackers: they wrongly assume that they cannot be manipulated.366

Now, let us contrast these dynamics with the work of professional illusionists—magicians. Like con artists and PSYOP operatives, magicians exploit human cognitive limitations toward a desired act of fakery.367 However, unlike con artists and PSYOP professionals who exploit human cognitive vulnerabilities for personal gain, magicians obtain consent and craft a context that avoids audience harm and abuses of trust. Their intent is not to deceive through unfair surprise and fraud, but to entertain in a shared enterprise. These key differentiating elements—intent and context—will form the foundations for our later conversation with respect to legal interventions in Part III.

2. Consensual Exploitation of Informational Vulnerability

Any sufficiently advanced technology is indistinguishable from magic.

—Arthur C. Clarke368

In England in 1903, with flair befitting a performance artist, the famous magician and inventor Nevil Maskelyne demonstrated that the wireless telegraph was, in some ways, a fake—it was not as secure as its creator, Guglielmo Marconi, had claimed it to be.369 In what is commonly believed in the security community to be the first documented case of the exploitation of a real-time communications technology vulnerability,370 Maskelyne preceded Marconi’s transmitted message with the words “rats, rats, rats, rats,” followed by a mocking limerick371 and cheeky Shakespearean references during the on-stage demonstration.372 By exploiting a security flaw in the Marconi wireless telegraph to corrupt the integrity of Marconi’s allegedly interference-proof transmission,373 Maskelyne used one act of fakery to expose another: his exploit revealed that Marconi had attempted to trick the public into misplacing its trust in his device. Marconi had misrepresented374 the security of his technology and its information transmission.375

Maskelyne’s profession as a magician may seem an unlikely background for the finder of a security flaw. But, in reality, it is fitting that a magician exposed an abuse of audience trust and a harmful security misrepresentation: magicians are themselves professional fakers. They are performance artists who exploit the limits of human perception and deploy “fakes”—a term of art in magic376—for a willing audience in a controlled setting. As the following Sections explain, illusionists offer a useful explanatory foil to con artists and PSYOP professionals in our discussion of fakery.

a. Abracadabra—A Shared Enterprise

In a seminal article on the manipulation of attention and awareness in stage magic, renowned magicians Teller and James Randi partnered with neuroscientist coauthors to explain the techniques used by magicians to deceive.377 In their pathbreaking interdisciplinary work, the authors argue that magicians craft cognitive and visual illusions to capitalize on shortcomings in humans’ underlying neural mechanisms.378 Teller explains it in this way: “Every time you perform a magic trick, you’re engaging in experimental psychology. . . . I’ve exploited the efficiencies of your mind”379 in much the same way that “[t]he pickpocket has found a weakness in the way we perceive motion.”380

The authors explain that magic exists on one side of an illusion spectrum, while on the other side sit pickpockets, scammers, and con artists—professional thieves and grifters who leverage many of the same techniques as magicians.381 In short, although magicians exploit and attack human cognitive weaknesses,382 they do so for art, not crime.383 The goal of the magician is to generate a harmless illusion with the spectators’ knowledge that it is, in fact, an illusion.384 In contrast, the con artist or PSYOP professional generates a false sense of trust for purposes of exploiting the target. Herein lies the first key difference between magic, on the one hand, and con artistry and PSYOP, on the other: a shared, consensual enterprise between the faker and the targets.

The second key difference rests in the context of the deception. In magic, the “mark” is deceived (with consent) in an understood environment—the theater. The mark has full knowledge of the boundaries of the environment, understanding that the space of the theater is the playing field of the magician. Both faker and target know that the power to control the context rests with the magician. Thus, the audience expects to be deceived while in the theater, and the audience expects that the deception will end upon exiting the theater unharmed. In the context of con artistry or PSYOP, marks have no knowledge that they have entered an environment controlled by a faker with superior information. Any consent on the part of the mark occurs with only the benefit of limited or manipulated information, rendering any “meeting of the minds” functionally weak and potentially legally insufficient.385 As additional layers of technology mediate the exchange, the “mark” faces yet another diminishment of control and information imbalance—a context that may appear unfettered to the mark but, in reality, is tightly sequestered by the faker. Framed another way, audiences extend trust to a particular faker in a particular context—the key is whether the trust is misplaced.

b. Illusions of Trust

As explained succinctly by Teller, Randi, and their coauthors, a successful illusion, much like a successful con or a successful PSYOP, is predicated on the audience’s trusting their own senses and perceptions, even when those perceptions are disconnected from the reality of the situation.386 In other words, magic combines multiple principles of human trust and perception to misdirect the audience in ways controlled by the magician.387

Specifically, according to Teller, Randi, and their coauthors, in illusionism, the illusion of trust is generated in the audience through triggering two forms of brain activation that neuroscience research believes to be associated with feelings of trust: activity associated with predictive ability (and feelings of conditional trust) and activity associated with social attachment (and feelings of unconditional trust).388 Thus, we might begin to map the types of fakery to these types of trust—i.e., fakery aimed at generating a false sense of predictability and fakery aimed at generating a false sense of social attachment.389 Magic tends to engage the first, while con artists and PSYOP professionals tend to engage the second.

In magic, the trust is conditional and limited. Fakery is done with the marks’ knowledge, and members of the audience voluntarily suspend disbelief for a predetermined duration of time to jointly craft the show with the magician. Magicians engage in accurate professional self-labeling and accurate description of their services.390 Most importantly, the trust is conditional because the fakery ends when the audience leaves the theater. The fakery in magic has a hard stop. Simultaneously, any unconditional trust generated by the illusionist during the show is not abused—no unfair surprise or unexpected harms befall the audience due to the conduct of the magician during the illusion. In contrast, a long con or a PSYOP operation might continue indefinitely, harming the minds and lives of unsuspecting marks.

In summary, while magicians may exploit cognitive limitations to create trust, that trust is, ultimately, not misplaced. Thus, magicians—but not con artists or deployers of PSYOP—subscribe to the ethical principle shared by many professions of “do no harm.” Building on these distinctions, the problems of Internet fakery can be framed as problems of misplaced trust in untrustworthy people and information. In other words, they are information security problems of social engineering and untrustworthiness.

The next Part explains how this concept of untrustworthiness can be connected to the computer science and philosophy understanding of “trustworthiness” to advance the fakery conversation in law.

II. Reconstructing Trustworthiness

[John Romulus Brinkley’s] name stand[s] out in bold relief among the great medical luminaries of this generation.

—John Romulus Brinkley391

In Buster Keaton’s 1922 film Cops, Buster leads a tired horse into a building to see a “goat gland specialist” named Dr. Smith; shortly thereafter, the horse exits bucking and energized.392 That same year, a “Dr.” John Romulus Brinkley in Kansas created a movie called Rejuvenation Through Gland Transplantation, where he chronicled his allegedly miraculously successful goat393 testicle operations394 that he claimed enabled thousands of men to overcome fertility issues.395 At a rate of $750 per surgery,396 Dr. Brinkley performed thousands of surgeries on patients who trusted him with their sensitive fertility issues, including a celebrity clientele that may or may not have included Huey Long, William Jennings Bryan, Rudolph Valentino, and President Wilson.397 In 1923, Brinkley became a pioneer of radio by launching what was likely the most powerful radio station in the world to advertise his fertility services.398 He used his station to “write” on-air prescriptions and then launched a network of druggists that sold his proprietary drugs.399 In response, the American Medical Association and Federal Radio Commission, the predecessor to the FCC, targeted Brinkley for enforcement,400 shutting down his radio station and ending his practice.401 In reaction, Brinkley ran as a write-in candidate for governor of Kansas in 1930 and later pioneered the use of the sound truck for election messaging amplification.402 Ultimately, Brinkley’s own litigation resulted in his legal denunciation as a total fraud. In 1939, he sued an editor of the American Medical Association’s journal for libel after the journal labeled him a medical charlatan, and it was during that litigation that he was exposed as, in fact, not even a licensed doctor.403 Yet, despite this objective lack of qualifications404 and medically questionable surgeries,405 some of his patients swore by his techniques,406 and he enjoyed tremendous electoral support.407 People subjectively trusted him, despite ample evidence that he was a charlatan and a con artist, objectively unworthy of their trust.

Today, instead of Brinkley’s goat testicle fertility surgeries and radio prescriptions,408 medical charlatans on the Internet promote miracle cures for autism409 and COVID-19.410 “Disinformation cult[s]”411 push conspiracy theories412 that have inspired manipulable marks to cause physical harm to themselves and to misinform others.413 Indeed, the severity of current fakery in health information on the Internet has resulted in the World Health Organization (WHO) launching a new field of interdisciplinary study—the field of “infodemiology.”414

The Section that follows examines questions of trust and trustworthiness, connecting them to the insights introduced in Part I—the importance of examining intent/knowledge and context. We begin by defining computer science, social science, and philosophy meanings of trust and trustworthiness.

A. “Trusted” Versus “Trustworthy”

The repetition of a catchword can hold analysis in fetters for fifty years and more.

—Justice Benjamin N. Cardozo415

Over a decade before Facebook’s FTC consent decree violations and the Cambridge Analytica abuses,416 the startup that would become Facebook was born in a dorm room at Harvard.417 In a now-infamous instant message exchange in 2004, then-college student Mark Zuckerberg was asked by an acquaintance why over four thousand Harvard and other students had voluntarily shared personally identifiable information with his new social network despite no warranties about future potential (mis)uses.418 Zuckerberg’s reply was curt: “THEY ‘trust me’ . . . dumb fucks.”419 Putting aside his pejorative analysis420 of user intelligence (and his potentially legally questionable conduct),421 Zuckerberg’s brutal comment succinctly and accurately identified the core of all Internet fakery issues today: misplaced trust.

Legal scholars and other authors have long considered questions of trust and trustworthiness.422 The legal scholarship reflects debate in the context of reputation,423 network neutrality,424 electronic commerce,425 online interactions generally,426 competition,427 social commerce,428 contextual factors,429 risk management,430 privacy,431 trademark law,432 consumer autonomy,433 criminal law,434 form contracting,435 campaign finance,436 the Internet of Things,437 self-help,438 reputation,439 personhood,440 cryptocurrency,441 corporate boards442 and corporate law more generally,443 expert witness testimony,444 exceptions to the hearsay rule,445 confidence in the judiciary,446 medical treatment,447 the Confrontation Clause,448 human subjects research,449 community norms,450 and the Sixth Amendment generally.451 Yet, in all but one of these law review articles,452 the framings of “trustworthiness” adopted by legal scholars conflict with the understanding of the concept from computer science. No prior legal scholarship rigorously bridges computing and non-computing discussion of trustworthiness in the context of technology fakery. Ergo, let us embark on precisely such an analysis.

1. Trustworthiness and Computing

In January 2002, Bill Gates sent an email to all Microsoft employees453 launching Microsoft’s Trustworthy Computing Initiative.454 In hindsight, this initiative would become a critical transformational milestone in both the life of Microsoft Corporation and the broader technology industry.455 Gates’s email announced that “Trustworthy Computing is the highest priority for all the work we are doing. We must lead the industry to a whole new level of Trustworthiness in computing. . . . Trustworthy Computing is computing that is as available, reliable and secure as electricity, water services and telephony.”456 In other words, Gates’s email highlighted a key computing term of art as the lodestar for the company’s future: trustworthiness.

Computer science draws a distinction between the concepts of “trusted” and “trustworthy.” In computer science, the word “trusted” refers to technical trust—that is, to which components of the system rely on other components as a technical matter—reasonably or unreasonably.457 In other words, saying a component or system is trusted merely signals a dependency458 as a descriptive matter; it is not an attestation of error-free functionality or an assertion of code or system quality. As explained by Professor Ross Anderson, “[T]he ‘trusted computing base’ is the set of all hardware, software and procedural components that enforce the security policy” and “a trusted component is one which can break security.”459 No normative question of the appropriateness of trust is included in this framing; it merely describes the functional relationship of components with respect to potential compromise.460 In other words, in computing, a component or system is “trusted” when, in fact, other parts of the system rely on it, even potentially to their detriment. Indeed, the adjective is sometimes used for processes that are not subjected to checks for security or correct operation, or for applications that are given higher levels of system access,461 because it is simply assumed462 (potentially incorrectly) that they will behave as they should.463 Thus, a trusted component or system may also be an abusive one unworthy of trust, in the sense understood in fields outside of computer science.464 In other words, trusted systems might not be trustworthy systems.

In contrast to the term “trusted,” the use of the word “trustworthy” signals a promise that a user’s trust is correctly placed—that the components or system reflect objectively testable properties of proper functionality in context.465 The National Institute of Standards and Technology (NIST) has historically defined trustworthy systems as ones that are “capable of operating within defined levels of risk despite the environmental disruptions, human errors, structural failures, and purposeful attacks that are expected to occur in its environment of operation.”466 In other words, trustworthiness means that a component or system can be expected to successfully perform in particular, stipulated contexts. It is a testable assertion. In practice, however, the description of a system as “trusted” in the technical sense, or as “trustworthy,” usually describes the system’s properties through the eyes of the system’s builders based on their own in-house testing.467 Thus, assertions of trustworthiness should be understood as a representation and warranty of quality in functionality in the contract law sense. Nevertheless, the builder’s description does not necessarily describe the system accurately as an objective matter, such as through the eyes of users or of a neutral auditor; the need for independent technical validation of trustworthiness assertions remains.

In summary, deployed trustworthy systems are trusted,468 but not all trusted systems are trustworthy in a technical sense. Thus, whether a technologist describes a system as merely “trusted” or as “trustworthy” is an important distinction, and it is one that may be missed by policymakers.469 Yet, it conceptually sits at the heart of the inquiries they are conducting.470
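For the technically inclined reader, the distinction can be made concrete. The minimal sketch below is ours, not drawn from any particular system, and the component names are hypothetical; it shows a password-checking component that the rest of a program “trusts” (that is, relies upon) even though it fails to enforce its stated policy, together with a simple audit illustrating that trustworthiness is a testable assertion.

```python
# A minimal sketch (hypothetical component names) of the "trusted" vs.
# "trustworthy" distinction from computer science.


class PasswordChecker:
    """A 'trusted' component: the account-creation flow below relies on it to
    enforce the security policy, whether or not that reliance is well placed."""

    STATED_POLICY_MIN_LENGTH = 12  # the builder's stated policy

    def is_valid(self, password: str) -> bool:
        # Bug: the check uses 8 rather than the stated 12, so the component
        # is trusted (relied upon) but not trustworthy (it breaks the policy).
        return len(password) >= 8


def create_account(checker: PasswordChecker, password: str) -> str:
    # The system "trusts" the checker: it relies on the answer without further checks.
    return "account created" if checker.is_valid(password) else "password rejected"


def audit_trustworthiness(checker: PasswordChecker) -> bool:
    """Trustworthiness is a testable assertion: does the component actually
    behave as promised in its stipulated context?"""
    too_short = "x" * (PasswordChecker.STATED_POLICY_MIN_LENGTH - 1)
    long_enough = "x" * PasswordChecker.STATED_POLICY_MIN_LENGTH
    return (not checker.is_valid(too_short)) and checker.is_valid(long_enough)


if __name__ == "__main__":
    checker = PasswordChecker()
    print(create_account(checker, "8chars!!"))              # accepted: misplaced trust
    print("trustworthy?", audit_trustworthiness(checker))   # False: trusted != trustworthy
```

Run as written, the script accepts an eight-character password and reports that the component is not trustworthy, illustrating how reliance (trust) and warranted reliance (trustworthiness) can come apart.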

Now let us merge these technical discussions of trust and trustworthiness with social and philosophical ones about the human experience of trustworthiness.

2. Trustworthiness and Society

When asking the question of how human beings trust,471 philosophers of trust have synthesized the subject into three targets or types of trust—trust in objects, trust in people, and trust in institutions.472 In a meta-analysis of the academic literatures of trust,473 Professor Katherine Hawley explains that three ideas consistently span disciplines in analyses of trust.474 First, trust in people and institutions requires a richer analysis than the identification of trust in objects.475 Second, trust in people and trust in institutions involve expectations of good intentions and competence in context; either one standing alone is not enough to support trust.476 In other words, the trust literature reinforces the two key elements identified by our analysis of con artistry—intent and context. Third, the philosophy of trust adds a critical new consideration: the existence of distrust as an independent construct.477 Thus, explains Hawley, distrust does not equal an absence of trust; instead it exists as an independent, equally potent opposing force.478 The absence of trust is more accurately considered a state of uncertainty that could fluctuate either in favor of trust or in favor of distrust.479

In brief, Professor Hawley explains that trust equals an expectation of honesty and knowledge in context, while distrust equals the situation where honesty or knowledge is doubted.480 She explains that the analysis, consequently, turns on an assessment of the commitment of the party asking for trust, as well as the extent of their knowledge in context; in other words, an assessment of their trustworthiness.481 Professor Hawley connects trust to the state of trustworthiness by explaining that preemptively granting trust does not always result in trustworthiness and that humans can exploit the vulnerability of preemptive trust extension.482

Indeed, Professor Hawley reminds us that while some people can be trustworthy in one context, they may be inherently untrustworthy in another.483 Assessments of trustworthiness depend in part on the extent of possible damage and its correctability. Trust is a benefit only if properly directed; if inappropriately directed it generates the impression that the “truster” is gullible, naïve, or irresponsible. Thus, a personal sanction of sorts occurs when trust is misplaced—a “penalty” for a failed determination of trustworthiness.484 Here, let us briefly reconnect this discussion with the discussion from prior Sections. Using words that might be applied to the dynamics of art forgeries485 and also to the dynamics of modern Internet fakery,486 confidence artist Edward Smith explained that successful manipulation of a “mark” turns on tricking the mark into two types of incorrect trustworthiness determinations. The first involves exploiting the trust of the mark directly; the second involves the mark’s functional self-exploitation as they struggle to project their own trustworthiness to third parties. Smith explained that the need of marks to preserve their own trusted roles within their social and professional networks487 makes them in part vulnerable to con artistry.488

In summary, the philosophy and social science literatures of trust connect to the computer science literatures of trust through the concept of trustworthiness. Literature from both computing and other fields highlights that successful relationships—whether technical or human—involve correct determinations of trustworthiness, i.e., a determination of good intentions and relevant knowledge to succeed in a particular context. Using the insight that trust and distrust are opposite ends of a sliding scale, we argue that trustworthiness and untrustworthiness similarly sit on opposite ends of a scale. Thus, untrustworthiness—the opposite condition to trustworthiness—refers to the condition where trust is misplaced in code and/or other people.

Conceptually, numerous areas of law rely on the idea that trustworthiness can be assumed by default in the absence of a warning to the contrary. In tort, product liability presumes that goods are fit for purpose and trustworthy for use as the manufacturer has described.489 Contract law presumes that a party who formally memorializes a set of representations and warranties is trustworthy until demonstrated otherwise.490 Regimes such as securities regulation491 and medical device regulation rely on voluntary disclosures that by default are presumed trustworthy until proven otherwise.492 As such, the law’s sanctions already implicitly focus on situations where untrustworthiness has given rise to harm—a physical harm, a material breach, a material nondisclosure, and the like. So too in the context of technology fakery, the law should primarily focus on articulating recourse for situations arising from problems of untrustworthiness.

But before we embark upon crafting a framework focused on untrustworthiness and fakery, let us first briefly understand the boundaries of any such legal approaches—the constraints and protections of the First Amendment in the context of fakery.

B. Freedom from Untrustworthy Content and Conduct

Father, I cannot tell a lie.

—maybe said by George Washington493

Prior Sections of this Article have articulated the elements of intent and context as the dispositive variables in determining untrustworthiness. This Section asks, “To what extent can untrustworthiness of Internet fakery be regulated?” What baseline constitutional constraints exist on approaches that aim to minimize content and conduct untrustworthiness?

The Commerce Clause specifically empowers Congress to regulate commerce between the states,494 and the Supreme Court has expansively understood this “regulability” to include the Internet.495 The most significant constraint on congressional and state regulatory power over Internet fakery, then, comes not from a lack of power but from the First Amendment. To date, two main categories of regulatory approaches have been proposed—false content prohibitions496 and various types of content-neutral approaches. These proposed or possible content-neutral approaches have included mandatory disclosure and labeling requirements,497 restrictions on amplification of certain content,498 altered moderation liability,499 information reuse restrictions,500 and personalization restrictions.501 First Amendment concerns related to each of these approaches will be introduced in this Section but discussed in greater detail in the companion article to this piece, Superspreaders.502

1. False Content Prohibitions in Context

Perhaps the most obvious First Amendment question involves the extent to which false speech is regulable in Internet contexts. A common misunderstanding exists in the popular press503 and perhaps even in some corners of the legal academy that United States v. Alvarez in essence created a First Amendment “right to lie.”504 It did not.

In Alvarez, the Court explained that a generalized prohibition on all false speech would present a new category of restriction505 that is content-based and without adequate historical basis,506 and it cautioned against “free-wheeling” approaches507 and tests involving relative balancing of social costs and benefits.508 But, the Court also explained that “there are instances in which the falsity of speech bears upon whether it is protected. Some false speech may be prohibited even if analogous true speech could not be. This opinion does not imply that any of these targeted prohibitions are somehow vulnerable.”509 The Court clarified that although some false speech in public debate510 is inevitable,511 false statements can present systemic risk in key situations where core governmental or market functions are impaired. Ergo, it is not falsity alone that gives rise to the restriction512 in, for example, situations of fraud, impersonation,513 perjury,514 false statements to government officials,515 or defamation.516 Citing to Virginia State Board of Pharmacy v. Virginia Citizens Consumer Council, Inc., the Court continued, “Where false claims are made to effect a fraud or secure moneys or other valuable considerations, say offers of employment, it is well established that the Government may restrict speech without affronting the First Amendment.”517

Turning to the particular facts of the case, where an individual was prosecuted under the Stolen Valor Act for lying about a military honor at a public meeting, the Court can be read to perform a three-part analysis. First, the Court examined the legal nature of the false information.518 In analyzing the legal nature of the defendant’s fakery, the Court noted that the Stolen Valor Act lacked nuance: it failed to differentiate between public statements of untrustworthy information and those made in private or for creative purposes.519

Second, the Court performed an implicit assessment of the intent and knowledge of the disseminator and an explicit assessment of the nature of the alleged harm.520 The Court explained that for the defendant, “[l]ying was his habit,”521 and it viewed the defendant’s speech as merely an attempt at social self-aggrandizement.522 The necessity523 of a falsity prohibition524 and a causal link to a recognized category of harm were not sufficiently demonstrated.525 Indeed, Professor Rodney Smolla explains that in Alvarez, the particular drafting was dispositive, resulting in a situation where “the law could not survive ‘exacting’ scrutiny.”526 He notes that the “plurality heavily emphasized that [t]he First Amendment requires that the Government’s chosen restriction on the speech at issue be ‘actually necessary’ to achieve its interest.”527 Smolla reiterates that the Court strongly emphasized a clear articulation of the nature of harm resulting from the falsity: “[T]he plurality stated that ‘[t]here must be a direct causal link between the restriction imposed and the injury to be prevented.’”528 In other words, traditional categories of legal harm can offer a basis for falsity regulation. As Professor Martin Redish and Julio Pereyra further explain, “[F]raud can justify government intervention, even after the Supreme Court’s Alvarez decision.”529 The core inquiry, Professor Redish and Kyle Voils explain, is based “on the nature and severity of the harm, not on the commercial nature of the expression.”530 In a related vein, Professor Eugene Volokh has argued that

being duped into hiring someone, or into opening your property to someone, based on affirmative lies would indeed count as a specific harm, even in the absence of physical property damage caused by the employee or visitor. . . .

. . . .

. . . And a public-spirited motive for getting a salary under false pretenses, or getting access to property under false pretenses, does not, I think, give a First Amendment immunity to the fraud.531

Third, the Court examined the broader context of the falsity.532 In analyzing the context of the prohibition on falsity, the Court found that the government’s assertions of compelling interest533 were not supported by its own prior acts. Specifically, the Court pointed out that rudimentary avenues of counterspeech534 had not been employed, most notably the creation of a website that would allow efficient confirmation of information and thereby facilitate counterspeech.535

Thus, a close reading of Alvarez reveals that a constrained interpretation of the case is warranted. As succinctly stated by Professor Martin Redish, “[T]he First Amendment should have only a limited impact on regulation of false speech.”536 In summary, when properly drafted, falsity prohibitions can avoid duplicating the Alvarez pitfalls and potentially survive First Amendment scrutiny.

2. Content-Neutral Approaches

In addition to falsity prohibitions, however, other types of approaches unrelated to the falsity of particular content have also been proposed—additional disclosure and content labeling requirements, amplification restrictions, moderation liability, information reuse limitations, and personalization restrictions.537 As one of us has already explained elsewhere,538 content-neutral regulatory approaches to technology that focus on code functionality and information security harms—i.e., harms unrelated to the ideas presented by the content—can survive First Amendment scrutiny.539

a. Disclosure and Labeling Requirements

In lieu of outright bans on particular types of content, a fakery law might adopt a strategy of requiring additional clarifying disclosures—either through a separate clarifying filing or through a labeling requirement for the relevant content. Such disclosures would in many cases survive judicial scrutiny. Even in the sensitive context of mandatory disclosure and labeling requirements for political content, Citizens United v. Federal Election Commission signals that multiple types of disclosure and labeling approaches are viable.540 In Citizens United, the Court expressly reaffirmed that disclaimers and disclosure requirements on political speech can survive First Amendment challenge in Internet contexts, finding “no constitutional impediment to the application of BCRA’s disclaimer and disclosure requirements to a movie broadcast via video-on-demand” and stating that “there has been no showing that, as applied in this case, these requirements would impose a chill on speech or expression.”541 Thus, the Court specifically upheld disclosure requirements as applied to Citizens United’s Internet communications.542 Hence, even though they may burden the ability to speak,543 disclosure requirements can survive scrutiny provided that they can be justified by a sufficient governmental interest. In other words, as explained by noted First Amendment scholars, even identity disclosure requirements544 would likely pass First Amendment scrutiny545 in particular cases.546 The primary concern with respect to their aggressive use is a historical and policy one: some types of labeling may hinder a key tool of furthering political discourse wielded by the Founding Generation—pseudonymous and anonymized speech.547 In brief, in the context of Internet fakery regulation, some types of disclosure and labeling approaches are likely to pass First Amendment review.

b. Amplification Restrictions

Another possible approach to limiting the impact of technology fakery involves restrictions on certain content-amplification conduct designed to create “noise.” In Internet spaces, amplification conduct often intentionally drowns out particular content by amplifying other content, submerging the former from users’ view and disrupting the public’s quiet enjoyment of the Internet.548 Indeed, at certain volumes of amplification, particularly when automated, the amplifying entity’s behavior can seem akin to the behavior of an attacker in a distributed denial of service attack. In other words, its character is more in line with conduct than with speech; content-neutral legal restrictions on amplification conduct do not necessarily offend the First Amendment.

In Ward v. Rock Against Racism, the Supreme Court explained that where the principal justification for governmental amplification guidelines arises from a desire to control noise levels in order to retain the character of a space and to avoid undue intrusion into areas of quiet enjoyment, the justification “satisfies the requirement that time, place, or manner regulations be content neutral.”549 The Court highlighted that the requirement of narrow tailoring is satisfied “so long as the . . . regulation promotes a substantial government interest that would be achieved less effectively absent the regulation” and “[s]o long as the means chosen are not substantially broader than necessary to achieve the government’s interest. . . . ‘The validity of [time, place, or manner] regulations does not turn on a judge’s agreement with the responsible decisionmaker concerning the most appropriate method for promoting significant government interests.’”550 To wit, limiting amplification conduct in particular technology contexts does not necessarily close a channel of communication; it can merely limit the volume to preserve quiet enjoyment of a particular space.551

While some legal scholars have characterized computational amplification tools such as bots as a “medium of speech” potentially offering “a new, unfolding form of expression,”552 such technology exceptionalism warrants hefty skepticism.553 Technology tools such as bots are more akin to remixing soundboards and sound trucks than to a writer’s new novel. The basis for their regulation arises not from the content of what is being amplified but from the act of amplification itself—the intrusive conduct of using the amplifier in particular ways, regardless of message. As such, legal restrictions on (humans’) conduct using bots and other technology amplification tools are likely to fall within the parameters of existing Supreme Court case law on amplification conduct. Ergo, just as some falsity prohibitions and disclosure/labeling requirements will survive First Amendment scrutiny, so too will properly crafted Internet amplification restrictions likely pass muster.
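To make the conduct/content distinction concrete, consider the following minimal sketch. It is ours, and the account identifier and the twenty-five-post threshold are purely illustrative, drawn from no statute or platform policy; the rule simply throttles how often an account may post or re-share within a rolling window without ever reading the message itself.

```python
# A minimal sketch of a content-neutral amplification limit. The account
# identifier and the 25-posts-per-hour threshold are purely illustrative.

import time
from collections import defaultdict, deque

MAX_POSTS_PER_WINDOW = 25      # illustrative cap, not drawn from any statute
WINDOW_SECONDS = 3600          # one-hour rolling window

_post_times = defaultdict(deque)  # account id -> timestamps of recent posts


def may_amplify(account_id, now=None):
    """Return True if the account is under its cap for the rolling window.

    The decision turns only on the volume and timing of the account's
    posting conduct; the content of the message is never examined."""
    now = time.time() if now is None else now
    timestamps = _post_times[account_id]
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()              # discard posts outside the window
    if len(timestamps) >= MAX_POSTS_PER_WINDOW:
        return False                      # cap reached: throttle further amplification
    timestamps.append(now)
    return True


if __name__ == "__main__":
    # The 26th attempt within the hour is throttled, whatever the message says.
    results = [may_amplify("example-account", now=1000.0 + i) for i in range(26)]
    print(results.count(True), "allowed,", results.count(False), "throttled")
```

Because the throttle keys only on the volume and timing of the conduct, any message, true or false, political or commercial, is treated identically, much as a time, place, or manner restriction operates.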

c. Moderation Liability

At least two forms of moderation liability currently coexist—one driven by an Internet-first framing arising from the Communications Decency Act,554 as qualified by the Supreme Court in Reno v. ACLU,555 and a second driven by traditional concepts of publishers’ duty of care from pre-Internet times as articulated by the Court in New York Times v. Sullivan.556 Scholarly opinions diverge on the law’s application to technology contexts: Professor Cass Sunstein has argued that “New York Times Co. v. Sullivan badly overshot the mark and that it is ill-suited to the current era.”557 Meanwhile, Professor David Logan has argued that the New York Times v. Sullivan standard,558 coupled with technology, has changed the nature of the public square.559 Thus, argues Logan, the Court’s approach should evolve in order to advance discussions regarding Internet speech and moderation.560 The issues of moderation add a new dimension to the fakery inquiry, requiring their own in-depth analysis. They fall outside the scope of this Article and will be addressed in future work.561

d. Information Reuse Restrictions

As explained in Section II.B, a portion of the Internet fakery dynamics is driven by information collection and reuse in targeted ways aimed at particular populations. As a consequence, one possible way to mitigate Internet fakery dynamics might be through information reuse restrictions. While such restrictions may seem initially problematic, upon closer examination of Sorrell v. IMS Health Inc.,562 we find that the Supreme Court has offered guidance as to what type of information reuse restriction may survive First Amendment scrutiny. Notably, in Sorrell, the Supreme Court validated privacy interests as a legitimate state interest for First Amendment purposes.563 As Professors Anupam Chander and Uyên P. Lê correctly point out, Sorrell does not mean the “death of privacy.”564 Sorrell similarly does not mean the death of Internet fakery regulation. As one of us has argued elsewhere,565 Sorrell does not pose a meaningful obstacle to (correctly drafted) information privacy statutes, provided they are neutral as to the commercial or noncommercial identity of the restricted entities—the fatal flaw of the statute at issue in Sorrell.566

e. Personalization Restrictions

Finally, and perhaps most surprisingly, the act of personalizing content can itself potentially be restricted by statute or regulation in some cases. These types of personalization and suitability requirements have already faced First Amendment challenge in Lowe v. SEC.567 In Lowe, the Supreme Court began to set forth the contours of personalization limitations that pass constitutional muster. Specifically, the Court pointed to the intent of the creator of an investment newsletter (that was not personalized), exempting him from registration requirements as an investment adviser in light of First Amendment concerns.568 These insights from Lowe, particularly in combination with the recognition of privacy interests in Sorrell, signal that a statutory approach limiting personalization conduct in content creation may survive First Amendment scrutiny. Such restrictions may survive on both free speech and press freedom grounds where they are necessary to protect market integrity, to prevent fraud, or to further other legitimate state interests,569 particularly where the dissemination arises from a statutorily created special status for the speaker/publisher. These key First Amendment insights inform the regulatory framework for Internet fakery introduced next in Part III.

III. Regulating Untrustworthiness: The NICE Framework

[C]ode is followed by commentary and commentary by revision, and thus the task is never done.

—Justice Benjamin N. Cardozo570

Part I introduced the problems of Internet MIST and two new dynamics—the arrival of the Internet long con and the risk of a PSYOP industrial complex. Reframing these challenges into a single concept, Part II introduced computer science and philosophical notions of trustworthiness and its inverse, untrustworthiness. It then presented the First Amendment constraints on any derivative legal framework of technology untrustworthiness. Informed by these concepts, this Part introduces one such legal framework for categorizing Internet fakery—the NICE framework. After introducing NICE, this Part sets forth a set of example regulatory initiatives, together with their analysis under the NICE framework.

A. NICE and Precise: Nature, Intent/Knowledge, Context Evaluation

The concept of untrustworthiness can be translated into a First Amendment–sensitive legal framework that consists of three separate inquiries or axes. The first axis involves an evaluation of the legal nature of the fake Internet content or conduct (N). The second axis involves the intent and knowledge of the faker and their legality (I). Finally, the third axis involves an assessment of the sensitivity of the context of the harm (C). These three elements are then blended into a single evaluation (E). Together, these four steps might be called the NICE framework (NICE).

1. Axis 1: The Legal Nature of the Fakery

To the rational mind nothing is inexplicable, only unexplained.571

—The Doctor

Axis 1 of the NICE framework involves an evaluation of the legal nature of the untrustworthy technology content or conduct. For this, we turn for guidance to the philosophy scholarship on the nature of fakery, specifically work on lying and deception. While there is no universally accepted definition of lying572 and deception,573 some definitions are a better fit for a legal fakery framework than others. To wit, we borrow the spirit of a key insight from the work of Professor Thomas L. Carson574: the act of lying is an invitation to trust where the person making the offer knows or has reason to know that the trust is misplaced.575 From this definitional insight springs a core distinction between lying and deception: Lying is extending the offer to trust and warranting its appropriateness576—i.e., making an untrustworthy offer. Deception occurs when that untrustworthy offer is accepted to the detriment of the acceptor577—i.e., successfully tricking a target with an untrustworthy offer. This key distinction bears echoes of the language of contract formation, a familiar paradigm for law that largely successfully coexists with First Amendment concerns.578 In the remainder of this Section, we build out four additional categories that also translate into other legally cognizable categories of fakery.

We propose that the legal nature of all technology fakery can be categorized as falling into six conceptual categories of untrustworthy content: bullshit, spin, half-truth, obfuscation, lying, and deception (BSHOLD).579 Each category of BSHOLD fakery will be discussed in turn below.

a. Bullshit

Philosopher Harry G. Frankfurt argues that “[o]ne of the most salient features of our culture is that there is so much bullshit.”580 In his seminal work, On Bullshit, he draws a distinction between lying and “bullshitting,” arguing that lying constitutes a conscious act of deception whereas “bullshitting” is “indifference to how things really are.”581 Frankfurt’s framing582 has been critiqued for both its failure to map cleanly to practical applications583 and its failure to comfortably map to legal categories offered by First Amendment jurisprudence. As such, we explicitly reject Frankfurt’s definition of bullshit and offer a different, First Amendment–sensitive legal replacement: Bullshit refers to expressive fakery created for satirical or creative purposes that contains obvious exaggeration and embellishment in the eyes of a reasonable person.584

For example, consider a music video available on YouTube where a Grammy-nominated585 performer clad in Holstein-themed garb asserts many times sequentially, “I’m a cow.”586 No reasonable person is likely to believe that she is, in fact, a bovine, no matter how many times she says “Moo.” This is prime cut, grade A Internet bullshit—it is creative, funny, engaging, and utterly (udderly?) ridiculous. It is also expression that a First Amendment analysis would protect. Thus, our definition of bullshit involves only obviously fake, creative, and satirical content that is uncontroversially protected by the First Amendment. This reframing avoids the ambiguities and pitfalls of Frankfurt’s bullshit definition.

By way of a second example, consider the various animal face filters and lenses available for streaming applications and social media sites.587 Cat filters in particular are a popular588 overlay that falsifies videos by replacing human body parts with cat-looking body parts. The resulting videos are obviously doctored for comic effect, and they are unlikely to deceive any human viewers of sound mind. Let us continue discussing cat filters and turn to the next category—spin.

b. Spin

In a now–(in)famous Internet courtroom incident, a Texas defense attorney experienced a technical difficulty with a Zoom filter, resulting in his image being projected to the judge and opposing counsel as a white, distressed kitten. The lawyer/kitten plaintively announced, “I’m here live. I’m not a cat”589 to the court, as he frantically fumbled with filter settings, trying to restore his visual humanity.590 While the filter itself might be classified as an act of humor and bullshit, in this case, the lawyer’s proclamation that he is, in fact, not a cat might be classified as something else—spin.

Spin refers to the strategic framing of content591 for advantage.592 Although the lawyer’s on-screen image appeared to the naked eye to be a cat, his audio feed informed the judge that he was “here live.”593 In this way, he attempted to strategically reframe the unexpectedly hairy encounter. His statement attempted to channel the court’s attention away from the talking white feline on the screen that belied his statement and toward the audio of his voice, highlighting his preparedness for the hearing.

The concept of “spinning” or presenting information in an optimally advantageous way for a speaker is certainly not new or legally unknown. Puffery is a well-established construct in both (pre-Internet) law of marketing594 and securities regulation.595 Or consider the functionality of photo-editing software. Editing images to make humans appear more in line with culturally specific standards of beauty is not a new practice. Indeed, an iconic image of President Lincoln is believed to have been a composite of Lincoln’s head on Southern politician John Calhoun’s body.596 However, at some point, these acts of strategic framing and spinning may cross over into more problematic territory—half-truth,597 lying,598 and deception.599

c. Half-truth

In technology contexts we often see fakery involving strategic omissions. For example, consider the infamous case of The Shed. In November 2017, the most highly rated of all the 18,000 restaurants in London listed on the travel information platform Tripadvisor600 was one called The Shed at Dulwich.601 Problematically, however, this dining “destination” was not actually a restaurant; it was a garden shed that had been hyped on Tripadvisor with fake reviews by friends and acquaintances of its Internet fakery-savvy owner.602 Allegedly because of his prior success with writing fake reviews, the owner decided to see whether he could make a completely fake restaurant into a hit.603 At first, he found excuses to turn potential customers away, but eventually, after reaching the number one spot, he invited three tables of customers to a meal at The Shed for a “press night,” where he served them microwave-ready meals.604 Thus, The Shed was itself an Internet half-truth, a fictional restaurant driven by Internet fakery and hype that eventually served a real meal.605

A half-truth involves a strategic blend of accurate description, material omission, and falsity that invites misplaced trust. Strategic omissions are a construct already familiar to law, addressed in bodies of law such as contract and securities regulation. When material,606 they give rise to sanction through civil607 and criminal means.608 But the challenges of fake reviews such as those that propelled The Shed to culinary “standing” also lead us into our next category of fakery—obfuscation.

d. Obfuscation

In a famous scene of The Thomas Crown Affair,609 a business tycoon who recreationally steals art enters a crowded art museum to commit a heist, wearing a bowler hat and carrying a briefcase. A remote police team and insurance investigator vigilantly watch him through an intranet. As police move in to arrest him, he puts down the briefcase and picks up a different but identical-looking one that is conveniently waiting for him. Suddenly, a swarm of haberdashed coconspirators arrives, walking in different directions, sometimes passing each other, swapping out identical-looking briefcases. The police team starts arresting bowler-hatted museum patrons to no avail—their target is gone. The swarm of bowler-hatted decoys with briefcases offers an example of fakery through obfuscation or noise: hiding the useful signal in a glut of extraneous information.

Obfuscation or noise refers to the strategic creation of large amounts of content that makes material opposing information hard to find. For example, in the context of a brutally adversarial litigation process, a common obfuscatory tactic of aggressive counsel in discovery involves overproduction of documentation in response to a production request in order to “bury” opposing counsel in paperwork.610 This type of “document dump” forces opposing counsel to spend large amounts of time wading through nonresponsive information in order to find a needle in a haystack, a responsive document.611

In Internet contexts, recent hashtag takeovers by K-pop stans illustrate a somewhat parallel noise dynamic: K-pop fans have flooded Twitter with images of their favorite performers to dilute the visibility of content marked with a racist hashtag.612 The behaviors themselves are in line with the mechanics of distributed denial of service attacks (DDoS).613 In a DDoS614 attack, an attacker attempts to make a website or application unavailable by sending it a large volume of traffic from multiple sources, often using a botnet made up of compromised machines or devices.615 From the point of view of the website owner suffering a DDoS attack, the noise from the attack traffic makes requests for the site from ordinary legitimate users hard to identify;616 from the point of view of ordinary users, the DDoS attack makes the content of the website hard to discern, because it is hard to access or is unavailable. In the NICE framework, these are examples of obfuscation or noise.617
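
To make the noise dynamic concrete, the following minimal sketch (in Python, with invented traffic volumes and an arbitrary per-source threshold) illustrates why distributed attack traffic is hard to filter: a naive per-source rate limit catches a single noisy sender but not a botnet that spreads the same load thinly across thousands of sources, so the traffic that is ultimately served remains overwhelmingly noise.

```python
# Minimal simulation of the obfuscation effect of distributed traffic.
# All volumes and thresholds are invented for illustration only.
import random
from collections import Counter

random.seed(0)

LEGIT_USERS = [f"user-{i}" for i in range(20)]   # ordinary visitors
BOTNET = [f"bot-{i}" for i in range(5000)]       # compromised devices

# Build a request log: each legitimate user sends a handful of requests,
# while each bot sends only a few -- individually unremarkable.
requests = [random.choice(LEGIT_USERS) for _ in range(100)]
requests += [random.choice(BOTNET) for _ in range(50_000)]
random.shuffle(requests)

PER_SOURCE_LIMIT = 20  # naive defense: block any source above this count

counts = Counter(requests)
blocked = {src for src, n in counts.items() if n > PER_SOURCE_LIMIT}
served = [src for src in requests if src not in blocked]
legit_served = sum(1 for src in served if src.startswith("user-"))

print(f"sources blocked by the per-source limit: {len(blocked)}")
print(f"share of served traffic that is legitimate: "
      f"{legit_served / len(served):.2%}")
```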

However, again, the specifics of the conduct in context frame the extent of harm. In some cases, the nature of the conduct is more than merely obscuring other information; the faker has warranted trustworthiness of content and context. When an explicit invitation to trust untrustworthy content is extended, that conduct qualifies as our next fake content category—lying.

e. Lying

In 2015, the U.S. Environmental Protection Agency announced that some Volkswagen diesel vehicles618 had been equipped with illegal defeat devices.619 In other words, the cars used software in the vehicle’s electronic control module (ECM) that sensed whether the vehicle was undergoing an emissions test, artificially altering the results to appear more favorable under test conditions.620 During emissions tests the ECM produced compliant emissions, but in other circumstances the ECM switched to a “road calibration” scheme which reduced the effectiveness of emissions control.621 The Volkswagen Group ultimately spent “tens of billions of dollars on regulatory fines and vehicle buybacks in the [United States] and [European Union],”622 and executives faced criminal charges in both the United States and Germany.623 In this scenario, Volkswagen was engaged in technology-enabled lying.
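
The structure of that conduct can be seen in a deliberately simplified sketch. The conditional logic below is a hypothetical illustration of how a defeat device operates in principle (its variable names and thresholds are illustrative assumptions, not Volkswagen’s actual ECM code), but it captures the essence of the lie: the software infers that it is being observed under test conditions and warrants different behavior there than on the road.

```python
# Hypothetical, simplified sketch of defeat-device logic; names and
# thresholds are illustrative assumptions, not actual ECM code.
def looks_like_emissions_test(steering_angle_deg: float,
                              speed_profile_matches_cycle: bool) -> bool:
    # Lab dynamometer tests keep the wheels straight and follow a
    # standardized speed trace -- conditions rarely seen on real roads.
    return abs(steering_angle_deg) < 1.0 and speed_profile_matches_cycle


def select_calibration(steering_angle_deg: float,
                       speed_profile_matches_cycle: bool) -> str:
    if looks_like_emissions_test(steering_angle_deg,
                                 speed_profile_matches_cycle):
        return "test calibration: full emissions controls engaged"
    return "road calibration: emissions controls scaled back"


print(select_calibration(0.2, True))    # during a test
print(select_calibration(35.0, False))  # on the road
```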

Lying refers to the act of unilaterally going on the record with an untrustworthy statement, inviting reliance upon it.624 Through its defeat devices, Volkswagen lied to environmental regulators: it unilaterally warranted particular test emissions as trustworthy to accurately reflect compliance with environmental standards and invited reliance on that warranty. Similarly, the FTC’s and the Consumer Financial Protection Bureau’s (CFPB) technology enforcement is often predicated on untrustworthy assertions contained in companies’ own advertising or website privacy policies—unilateral warranties that invite misplaced reliance.625 Or, in a securities regulation context, a public company’s false statement in its periodic disclosures that no material problem exists related to, for example, its information security626 would constitute lying under our definition; it is conduct that potentially gives rise to a basis for both SEC enforcement627 and investor litigation.628 It reflects a situation where a company goes on the record with an assertion about its internal technology operations, knowing that even the absence of disclosure will be taken as a warranty of no material omission. Regardless of whether any particular investor navigates to the investor relations section of the corporate website and reads the 10-K containing the material omission, by making the warranty, in the eyes of the SEC, the company invites misplaced reliance on untrustworthy content—i.e., carries out the act of lying. However, if particular investors take action in reliance on the lie to their detriment, the lie crosses over into an act of deception.

f. Deception

Between 2013 and 2015, Facebook and Google were allegedly scammed out of over $120 million through an email scheme involving fraudulent invoices for allegedly purchased equipment.629 The scam succeeded through phishing,630 in which an attacker sends deceptive emails or other online messages that are faked to appear to come from a reputable source. If the victim is persuaded to take the directed actions, those actions generally defraud the target for the benefit of the attacker and/or result in the victim’s machine being infected with malware for later exploitation.631 In other words, the attacker engaged in lying to the targets and tricked the targets into taking action to their detriment based on untrustworthy content or conduct. This is the final category of technology fakery—deception.

Deception involves successfully tricking a target with lying, where the target acts in reliance on the untrustworthy content or conduct in a manner that causes detriment to the target. While lying is a unilateral act of going on the record with fakery and inviting reliance, deception is the bilateral act of warranting trustworthiness of fakery that successfully tricks the target into taking action in reliance. As previously described, phishing attacks, in particular, rely on triggering misplaced trust.632 They deceive through impersonation, attempting to convince the target of not only trustworthy content but also the trustworthiness of the sender, often engaging in conduct that is already prohibited, such as trademark infringement633 and using sender information that resembles634 that of the impersonated sender in order to intentionally cause confusion.635
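
The “resembles the impersonated sender” element is itself mechanically detectable. The short defender-side sketch below (with hypothetical trusted domains and an arbitrary similarity threshold) flags sender domains that are suspiciously close to, but not identical to, a trusted domain, the same string-similarity intuition that underlies many anti-phishing filters.

```python
# Sketch of a lookalike-sender check; the domain list and threshold are
# illustrative assumptions, not a production anti-phishing rule.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example-bank.com", "example-vendor.com"}  # hypothetical

def looks_like_impersonation(sender_domain: str,
                             threshold: float = 0.85) -> bool:
    if sender_domain in TRUSTED_DOMAINS:
        return False  # exact match: the genuine sender
    return any(
        SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

for domain in ["example-bank.com", "examp1e-bank.com", "unrelated.org"]:
    print(domain, "->", looks_like_impersonation(domain))
```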

But how do we analyze the less nefarious deceptions under the NICE framework? For example, how does a phishing attack differ from the situation where a friend merely copies and pastes the wrong link into an email? For this analysis we move to the second axis of the NICE framework—the faker’s intent and knowledge.

2. Axis 2: Intent and Knowledge of the Faker

Never gonna tell a lie and hurt you.

—Rick Astley636

Rickrolling is an Internet prank637 in which a faker deceives a target, tricking the target into clicking on a link that appears trustworthy but unexpectedly leads to a video of the 1987 pop hit “Never Gonna Give You Up” by Rick Astley.638 In practical terms, a cheeky friend639 tricking you with a Rickroll at worst triggers mild annoyance.640 On a technical level, however, misrepresenting the content of the link in order to elicit a target’s click constitutes a deception that is not dissimilar to a phishing attempt. Now imagine that the linked Rick Astley video is not a harmless act of Rickrolling from a friend; instead, an attacker posing as your friend has sent you a link to a malicious pretender video that impersonates Astley’s iconic one.641 As you admire Astley’s stylish trench coat,642 malware infects your machine, and “lulz”643 swiftly become replaced with “pwns.”644 Now consider a third scenario where your friend, a(n unironic) fan of Rick Astley’s music, intends to send you a link to a news article about a local political event but instead accidentally copies and pastes the link from another window that is playing Rick Astley’s music. As these examples demonstrate, although the three acts of deception initially manifest in technologically similar ways, the intent and knowledge behind each (and any subsequent harm)645 can be importantly different. To wit, intent and knowledge comprise the second axis of a NICE analysis.
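
The technological kinship between a Rickroll and a phishing lure is visible at the level of the link itself: the text shown to the reader points one place while the underlying hyperlink points somewhere else. A minimal sketch of that check follows; the example links are illustrative, and real mail clients apply far more elaborate heuristics.

```python
# Sketch of a link-mismatch check; example links are illustrative.
from urllib.parse import urlparse

def link_is_misleading(display_text: str, href: str) -> bool:
    """Flag links whose visible text names a different host than the href."""
    shown = urlparse(display_text if "://" in display_text
                     else "https://" + display_text).hostname
    actual = urlparse(href).hostname
    return bool(shown) and bool(actual) and shown != actual

# Visible text promises a news article; the href goes somewhere else.
print(link_is_misleading("localnews.example/article",
                         "https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # True
# Visible text and href agree.
print(link_is_misleading("www.youtube.com",
                         "https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # False
```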

As previously discussed, the law is comfortable with assessments of a defendant’s intent and knowledge, and such assessments appear in various legal regimes to provide granularity in analysis.646 For example, let us once again look to contract law, where deceptive intent and knowledge are derived partially from an analysis of the contractual relationship as a whole, including implicit power imbalances and equitable concerns.647 In particular, contract law imposes implied duties of good faith in performance and breach mitigation, and as part of its analysis the court incorporates its objective assessment of the parties’ intent.648 Contract law also explicitly contemplates questions of fraud and misrepresentation through the lens of intent, dividing misrepresentation into intentional and innocent rubrics.649 Materially different treatment is afforded by contract law to honest mistakes as opposed to intentional misstatements or omissions.650 A similarly nuanced analysis of intent and knowledge can be brought to bear in analysis of technology fakery through the NICE framework.


Returning to Volkswagen’s defeat devices as an example of lying: as the plea agreement in United States v. Volkswagen articulates, the defeat devices in question were developed and deployed in Volkswagen’s vehicles with the knowledge and supervision of Volkswagen employees651—strong evidence of intent to engage in illegal conduct. As another example of intent and knowledge playing a qualifying role, consider a purchase of fake followers for a social media account. The purchaser obfuscates the true popularity of the account, and such a purchase is an intentional act. However, some researchers have suggested that a sudden large increase in the number of fake followers of a particular politician’s account may have been the result of a non-supporter purchasing them, without the politician’s knowledge, in order to cause embarrassment.652 If so, in such cases the obfuscation was not deliberate on the part of the account owner. Thus, intent and knowledge are central to a nuanced determination of appropriate sanction for Internet fakery. So too is the sensitivity of the context in which the fakery causes harm.
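
Notably, the bulk-purchase pattern itself is often visible in public data even when intent is not. The sketch below (using invented follower counts) flags a single-day follower gain far outside an account’s usual daily variation; what the numbers cannot reveal is precisely the Axis 2 question of whether the account owner or an embarrassment-seeking third party bought the followers.

```python
# Sketch of spike detection on public follower counts; the counts and
# the z-score cutoff are invented for illustration.
from statistics import mean, stdev

daily_followers = [10_000, 10_120, 10_250, 10_380, 10_500,
                   60_500,               # suspicious single-day jump
                   60_620, 60_750]

gains = [b - a for a, b in zip(daily_followers, daily_followers[1:])]
baseline = gains[:4]                     # the quiet period before the jump
mu, sigma = mean(baseline), stdev(baseline)

for day, gain in enumerate(gains, start=1):
    z = (gain - mu) / sigma if sigma else float("inf")
    if z > 5:
        print(f"day {day}: gain of {gain} followers "
              f"(~{z:.0f} standard deviations above the baseline)")
```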

3. Axis 3: Context Sensitivity

We didn’t focus on how you could wreck this system intentionally [when designing the Internet].

—Vinton Cerf653

In his seminal book Code and Other Laws of Cyberspace, Professor Lawrence Lessig recounts a story of two neighbors, Martha and Dank, and an incident involving the unfortunate poisoning of Dank’s dog. Lessig highlights that “[o]ne difference was the nature of the space, or context, where their argument was happening”654: it was within a massively multiplayer online game, and the dog was a virtual object in the game. As a result, compared with a dog poisoning in an offline context, the stakes are arguably lower, or at least different, as are the possible solutions.655 Yet, the nature of Martha and Dank’s relationship as (virtual) neighbors was a complicating element of the context.656 A fact-intensive inquiry is unavoidable.

The third axis of the NICE framework recognizes this insight and involves the (legally cognizable) sensitivity of the context in which the technology fakery occurs. Just as particular acts of fakery can vary in intent and knowledge, so too the sensitivity of the context varies, affecting the ultimate impact of the fakery. A context sensitivity analysis can be divided into four separate prongs of inquiry—stakes, substantiation, status, and solutions.

The first prong, the stakes of the context, involves an assessment of the severity of possible (legally cognizable) harms that could arise for targets of the fakery, as well as the number of targets impacted. For example, in a situation where Internet fakery may have directly or contributorily caused death, the consideration of stakes would push the assessment toward a more negative outcome for the faker and a strong connection to a set of grievous existing legal harms—homicide/manslaughter and wrongful death. Consider the situation where an attacker spoofs the identity of an employee with administrative privileges to log in and change settings on a hospital system maintaining ventilators, resulting in the death of patients. Or consider the scenario where the ill-fated Fyre Festival might have resulted in deaths due to lack of potable water; in such a circumstance, the victims would have presumably relied on the untrustworthy hype about luxury accommodations and assumed that potable water was available on the island.657 If the context is sensitive enough to potentially result in or contribute to the death of unwitting targets, the context sensitivity is high and potentially merits greater sanction for the fakery.

The second prong, substantiation, asks whether a shared baseline assessment method on point exists, and whether the fakery at issue violates the target’s expectations under that shared baseline.658 By first establishing the existence of a shared baseline and valuation method,659 courts and legislators can eliminate the need for complex legal or philosophical discussions of “truth.” This approach of explicit judicial incorporation of shared baselines is not novel: contract law has long recognized the value of incorporating externally created baselines to resolve disputes between parties. For example, contract law cases regularly include judicial analysis in reliance on external pricing mechanisms,660 the norms of the course of dealings in the parties’ industry,661 word meaning as determined by outside experts,662 and other similar objective baseline determination methods to resolve disputes where contracts are silent or unclear. Similarly, as the FTC has recognized, the image of a lock triggers a shared consumer expectation of certain levels of encryption and security in information transmission on a website.663 If a website presents users the lock icon without also meeting the implicitly understood level of security, a shared baseline developed through both expertise and process is violated. Therefore, the context sensitivity is greater and the fakery arguably more worthy of sanction.
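
Part of what makes the lock-icon baseline attractive for legal purposes is that it can be verified mechanically. The sketch below (pointing at the placeholder host example.com and requiring network access) checks whether a site actually presents a validated certificate and a modern TLS version, one objective slice of the security the icon implies.

```python
# Sketch of checking part of the "lock icon" baseline mechanically;
# "example.com" is a placeholder host. Requires network access.
import socket
import ssl

def tls_baseline(host: str, port: int = 443) -> dict:
    context = ssl.create_default_context()   # verifies cert and hostname
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {
                "tls_version": tls.version(),          # e.g., "TLSv1.3"
                "issuer": dict(x[0] for x in cert["issuer"]),
                "valid_until": cert["notAfter"],
            }

if __name__ == "__main__":
    print(tls_baseline("example.com"))
```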

Third, the status prong involves an examination of the types of power imbalances disfavored by various bodies of law, including contract law and criminal law—e.g., forms of severe information imbalance,664 manipulation,665 or limitations on access to external assistance.666 The analysis of this prong should align with, for example, contract law notions of equitable and relational667 concerns, including the prior course of dealings of the parties, the involvement of third parties either in reliance or to further the subterfuge, as well as the extent of reliance by the target of the fakery. Just as in information-imbalanced contract situations, ambiguities under this prong should be construed in favor of the target. Thus, this prong injects some subjective analysis into an otherwise objective inquiry. In other words, while the second prong of substantiation engages with the objective question of deviation from recognized baselines, the third prong of status allows for recognition of the subjective experiences of the fakery target, i.e., further recognizing the particularities of each context.

The final prong examines possible solutions. As in contract law, where financial compensation and other methods of correction can sufficiently remediate a harm, the sensitivity of the context (and the severity of the harm) is generally deemed lower. The extent of accurate prior threat modeling and the effectiveness of risk mitigation enter the analysis as well. The most severe situations involve harms for which solutions are not readily available through damages awards or through orders of specific performance—loss of life, harms where time was of the essence, national security harms, and the like. In such fakery cases, criminal sanctions might be favored over merely civil sanctions. As the nature of a harm becomes more complex and difficult to articulate cleanly, that complexity should serve as a harbinger of the lack of compensability and the failure of damages awards as adequate remedy. For instance, psychological and electoral harms maliciously inflicted on civilian populations through the use of PSYOP are an example of a severe harm where precise measurement is challenging. As the Army PSYOP Field Manual explains, even when the results of PSYOP are visible, precise quantification and causal ties are difficult to prove.668 This does not change the reality of the negative impact or the appropriateness of sanction as a national security harm.

This four-prong context sensitivity analysis enables the creation of a common language around both economic and nonpecuniary harms across different categories of technology fakery. In particular, it avoids narrow Internet exceptionalist analysis that would hinder harmonization across international boundaries. Similarly, its technology-neutral framing assists with identification of relevant traditional legal approaches that can be married into analyses of technology harms. In this way, it is more likely to successfully avoid unintended spillover effects that damage established bodies of law—a problem already visible in some technology regulatory contexts.669

4. The Evaluation

The evaluation process in the NICE framework involves three steps set forth below. The first step involves a categorization of the legal nature of the fakery. Second, a determination of a faker’s level of malicious intent and knowledge occurs. Third, the evaluation determines the legally cognizable sensitivity of the context and its impact on the harm resulting from the fakery. Finally, appropriate legislative and regulatory sanction is then considered based on the results of this evaluation, engaging with traditional legal paradigms as much as possible. The more obviously malicious the intention, the greater the intent or level of knowledge of likely harm arising from the fakery, and the more sensitive the context, the more appropriate the imposition of legal sanction.

To engage with a concrete application, let us return to cat filters. Clearly bullshit under our categorization, the application of a cat filter by a user is usually intentional; however, as the kitten-lawyer incident discussed earlier in this Part demonstrates, it can also occur unintentionally. Cat filters, even when unintentional and inappropriate in context, are unlikely to cause any legally cognizable harm if a product of user error. However, consider the scenario where the cat filter is applied by a third party through an act of computer intrusion. Here the context of the fakery modifies the analysis and the appropriate sanction: the sanction results not from the content itself but from the conduct underlying the Internet fakery—the intrusion. Thus, the appropriate target for any regulatory intervention is not the involuntary cat filter user but instead the intentional perpetrator of the crime of computer intrusion that resulted in the nonconsensual cat filter.

Next, consider a doctored video670 where facial features have been adjusted,671 or where objects or people have been edited out672—a practice with a particularly problematic history.673 Editing of images may also occur unexpectedly in some technology contexts—without the knowledge of the person producing the images. For example, some applications correct eye-gaze direction,674 which, although benign, has been described by some users as “creepy.”675 Alternatively, text giving important context for an image may be removed, for example, from a newspaper clip circulated online,676 or a video’s speed may be changed at critical moments, giving a false impression.677 In these scenarios, defamation claims may offer one viable remedy or could evolve to offer one.

Now consider a different, particularly challenging half-truth scenario—one where a flawed machine learning system relies on inadequate training data to generate classifications of participants in, for example, a government benefits program, and impacted users may experience financial, safety, and dignitary harms.678 Here, while the legally problematic nature of the fake content is straightforward, the analysis of intent/knowledge and context becomes more complicated.
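
A stylized sketch illustrates why such systems amount to half-truths. In the toy example below (all numbers invented), a benefits-eligibility model is “trained” on data that barely represents one group of applicants; the resulting rule reports high accuracy on the data it saw while systematically misclassifying the underrepresented group it later governs.

```python
# Toy illustration of inadequate training data; all numbers are invented.
from statistics import mean

# Each record: (income, group, eligible). Group "B" is barely represented.
train = [(20_000 + 100 * i, "A", True) for i in range(50)]
train += [(40_000 + 100 * i, "A", False) for i in range(50)]
train += [(70_000, "B", True)]   # group B qualifies under different rules

def fit_threshold(data):
    """'Training': pick the income cutoff that best fits the data it has."""
    def accuracy(t):
        return mean((income <= t) == label for income, _, label in data)
    return max(sorted({income for income, _, _ in data}), key=accuracy)

threshold = fit_threshold(train)  # ends up at 24,900: group B is sacrificed

# The deployed population contains far more group-B applicants.
test = [(22_000, "A", True), (43_000, "A", False),
        (70_000, "B", True), (72_000, "B", True), (75_000, "B", True)]

for income, group, label in test:
    predicted = income <= threshold
    print(f"group {group}, income {income:>6}: "
          f"predicted eligible={predicted}, actually eligible={label}")
```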

Finally, an Internet pump-and-dump scheme offers a straightforward example of an intentional deception in a sensitive context.679 Untrustworthy content and perhaps artificial amplification conduct induce reliance to the financial detriment of targets in such a scenario. The content is intended to defraud, and the context sensitivity is already legally recognized as high by securities regulation—investors suffer from an information imbalance and the government interest in preserving financial market stability through trustworthy financial disclosures is well established.

B. A NICE Future for Fakery Regulation: Addressing MIST

The preceding Sections of this Article have introduced the NICE framework for addressing technology fakery. This concluding Section crystallizes some of the possible legislative and regulatory lessons of the framework. It also highlights examples of specific types of regulatory interventions that may hold promise in line with the NICE framework.

1. Addressing Manipulation

In addressing manipulation, as explained previously, a prohibition on false content remains a viable approach if carefully crafted. In summary, even after Alvarez, legal restrictions on false speech are most likely to survive First Amendment scrutiny where they reflect four criteria. First, they target a traditional category of harm previously recognized by the Court680 that is unrelated to a particular (competing) communicative message.681 In order for any outright prohibition on false content to survive First Amendment scrutiny, the framing of the prohibition must start from the identification of a narrow, specific, traditionally recognized state interest. That list is short: preservation of fair bargaining in the marketplace, national security, public safety, administration of justice, and preservation of other core governmental functions. Second, the regulation is drafted in a manner that demonstrates a causal link between the restricted false content and the harm, implicitly underpinned by malicious intent/knowledge of the faker.682 Third, the selected regulatory framing should be one that promises greater efficacy, explaining why less burdensome means, such as counterspeech, have failed or would prove ineffective.683 Finally, the commercial versus noncommercial identity of the speaker should not be a determinative element under the statute.684 If properly drafted, restrictions on falsity in particular technology contexts that meet these criteria can offer one possible regulatory intervention for addressing a portion of technology fakery.685 As the standards for commercial and noncommercial speech continue to merge, so too does the treatment of natural versus corporate persons under the law. The legislative and regulatory approaches most likely to survive First Amendment scrutiny are those blind not only to the message of a speaker but also to both the content’s commercial or noncommercial nature and the identity of the speaker as a human or corporate person.

As a first cut, however, the most promising approaches in addressing manipulation in the short term are those that rely on labeling and additional disclosure; they can be scaled quickly. For example, because First Amendment rights apply only to U.S. persons,686 labeling requirements for content produced outside the United States by non-U.S. persons that target U.S. audiences would likely survive First Amendment scrutiny, particularly if limited to sensitive contexts such as election communications where foreign direct involvement is already legally regulated.687

2. Addressing Impersonation

Legal approaches that focus on prohibiting impersonation of the identity of legal persons hold promise.688 For example, they might target misidentification of source and user confusion in a manner reminiscent of frameworks in trademark law. While preservation of pseudonymity holds value in furthering discourse, impersonation of another existing person’s identity (in a nonsatirical manner) is not an act of pseudonymity—it is an act of goodwill usurpation and source misidentification. For example, statutes that prohibit Internet use of another person’s identity for purposes of fraud or voter manipulation might offer a natural extension of existing law: voter fraud and identity theft statutory frameworks on point, such as the Identity Theft Penalty Enhancement Act689 and similar statutes, already exist in both state and federal law.690 Additionally, state data security or privacy statutes might be expanded to limit the repurposing of particular categories of residents’ personal information for any commercial or noncommercial purpose, creating private rights of action. Finally, defamation statutes might be updated with technology-neutral language, eliminating a portion of the current obstacles to the creation of more robust and equitable doctrines of Internet defamation.691

3. Addressing Sequestration

To mitigate sequestration, a statutory disclosure requirement might mandate the creation of a public repository where all variants of candidate-sponsored and PAC-sponsored political advertising targeting U.S. citizens must be filed shortly after first use. Such a repository could be jointly managed by the FEC, DOJ, and Cybersecurity and Infrastructure Security Agency and would serve national security and election integrity interests. It would offer both third-party researchers and government enforcers additional tools to identify foreign election interference and other problematic Internet fakery dynamics. Such an approach would also likely survive First Amendment scrutiny.692 Another possible regulatory avenue involves strengthening disclosure requirements for paid endorsements. In particular, relevant regulators might strengthen existing disclosure requirements for social media influencers and increase enforcement actions targeting unlabeled paid promotions by social media influencers.693 Other approaches are also possible, including potentially an influencer or targeted advertising “broker dealer” registration and disclosure scheme modeled on securities regulation.694
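
To make the proposal concrete, the sketch below shows one purely hypothetical shape such a filing might take; the field names are illustrative assumptions, not any agency’s actual schema, but they suggest that a variant-level disclosure record is technically lightweight to produce and to publish.

```python
# Purely hypothetical sketch of a single repository filing; field names
# are illustrative assumptions, not any agency's actual schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AdVariantFiling:
    sponsor: str                  # candidate committee or PAC of record
    variant_id: str               # one entry per creative/targeting variant
    first_used: str               # ISO timestamp of first dissemination
    platforms: list[str]
    targeting_criteria: dict[str, str]
    creative_sha256: str          # hash of the ad creative as served
    filed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

filing = AdVariantFiling(
    sponsor="Example PAC",
    variant_id="2024-senate-web-variant-017",
    first_used="2024-09-01T14:03:00+00:00",
    platforms=["video-sharing", "social"],
    targeting_criteria={"geography": "PA-12", "age": "45-65"},
    creative_sha256="<hash of the creative file>",
)
print(json.dumps(asdict(filing), indent=2))
```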

4. Addressing Toxicity

In addressing toxicity, separate from any restriction on untrustworthy content, restrictions on certain types of amplification conduct and personalization/targeting may be appropriate. Such restrictions might be based on a broader set of governmental interests, including preserving quiet enjoyment of a space, preserving election security, or preserving market fairness.695 Just as we restrict DDoS attacks696 and bot-facilitated purchases of concert tickets,697 artificial “straw amplification” of content created by U.S. persons may be restricted, recognizing that such a regulation should be cautious to avoid any assessment of the underlying message itself or a restriction of the underlying channel for unamplified use.698 Similarly, Congress might instruct agencies such as the FEC, FTC, SEC, and CFPB to introduce targeted/personalized messaging prohibitions where critical infrastructure integrity interests are at stake.699 Additionally, registration and conduct reporting requirements for public relations firms, in a manner parallel to the structures in place for various participants in financial markets, might begin to mitigate the problems of fakery amplification by dark PR and PSYOP professionals for hire.700

Conclusion

Oh, what a tangled web we weave

When first we practise to deceive!701

For five years, a ten-foot bronze spider sculpture created by artist Louise Bourgeois sat on Pier 14 of the Embarcadero in San Francisco, welcoming visitors and residents alike.702 On loan from a private collector, the giant spider had originally been slated to remain on display for only eight months.703 However, because of public fascination with the art, the run was extended.704 An inspirational and somewhat controversial piece of art, the crouching spider triggered years of awe, appreciation, and debate.705 Bourgeois’s giant spiders offer an arachnid contrast to the tarantulas that lived in AT&T’s basement and to Lucian’s giant spiders. Unlike those menacing arachnids, Bourgeois’s spiders engaged with the public without guile or risk of harm, sometimes causing spirited disagreements about their merits.

The model of Internet fakery regulation that offers us a path forward is one that leads us closer to the spirited debates caused by Bourgeois’s spiders and further away from looming debacles of AT&T’s and Lucian’s spiders. As this Article has explained, the first step in this enterprise involves crafting a common baseline among policymakers and regulators around a legal concept of “untrustworthiness” through the NICE framework—an evaluation driven by the legal nature of fake content, the intent and knowledge of the faker, and the sensitivity of the context in which the fakery occurs.


* Associate Dean of Innovation and Professor of Law and Engineering Policy, Penn State Law (University Park); Professor of Engineering Design, Penn State Engineering; Founding Director Penn State Policy Innovation Lab of Tomorrow (PILOT); Founding Director, Penn State Law Manglona Lab for Gender and Economic Equity. The authors wish to thank Michael Antonino, Gaia Bernstein, Matt Blaze, Leisel Bogan, Julie Cohen, Lilian Edwards, Mark Geistfeld, Sue Glueck, Tatters Glueck, Gray C. Hopper, F. Scott Kieff, Christopher Marsden, Gaela Normile, Martin Redish, Daniel Susser, Marcia Tiersky, and Jessica Wilkerson. **Honorary Lecturer, Centre for Doctoral Training in Interactive AI, University of Bristol; Academic Affiliate, Penn State Policy Innovation Lab of Tomorrow (PILOT).