Theorists who contend that tort is designed to do justice cannot explain strict liability. The strict sector plagues these scholars because it extracts payment from defendants who have acted reasonably and are therefore considered innocent. If tort is about wronging and recourse, then strict liability makes no sense. Stymied, justice theorists have ceded the sector to economically minded counterparts who are concerned primarily with efficient market outcomes. As this theory has taken hold, some have declared strict liability “dead.” This Article offers a justice theory of the interpersonal wrong that permits liability in the absence of traditional fault—namely, the delegation of relational labor to inanimate, care-insensitive instrumentalities. These delegations may be efficient and low risk, but they are genuine wrongs because they treat relational counterparts as unworthy of authentic human care. This theory not only explains long-standing strict liability for activities like blasting, but it also has the power to address the modern wrong of injury-by-algorithm. Indeed, as the regulatory state permits market scions to replace real relationships with artificially intelligent ones, tort may be the only body of law able to guarantee that technology serves society and not the opposite.
The Article starts with a baseline proposition: tort constructs communities by facilitating care between their members. How so? Communities want their members to maximize self-interest while minimizing other-harm. The ability to do this arises from human neurocognition. Specifically, community members are able to gather sensory data about others in the same problem space, leverage tacit knowledge to contextualize others’ likely behaviors, and adapt their goal pursuits in real time to avoid imminent injuries. The rules of tort surface these steps in the neurocognitive sequence and formalize the community expectation that members will follow them. For example, without explicitly referring to neurocognitive concepts, many negligence doctrines nevertheless isolate incompetent perception, contextualization, and action as signifiers of carelessness. On a cognitive theory of relational care, community members can also wrong their counterparts by delegating relational labor to instrumentalities unable to perceive, contextualize, or act as a human would. Actors who use these instrumentalities as relational replacements are consciously renouncing the ability to give authentic care. Consequently, even when reasonable, such delegations are relational failures. A justice version of tort activates strict liability to order compensation for the harms they cause.
Understood as a law of relational justice, tort is fully equipped to deal with even the most disruptive technologies. The Article makes this case using as examples three AI innovations now populating modern life: autonomous vehicles, robot journalists, and facial recognition technologies. These nonhuman instrumentalities serve drivers, publishers, and police in a way that seems reasonable and nonnegligent in the aggregate. But they have already caused death, defamation, and racially biased arrest—all without serious regulatory discipline. A justice version of tort built around the obligation to exercise human relational cognition can determine—objectively and publicly—whether contested AI applications are failing the community. When actors use these technologies to supplement their relational labor (think driver-assisting cars), they retain ultimate control and the ability to give authentic care. In these cases, a negligence rule will identify wrongfulness by searching for incompetent execution of the cognitive sequence. But when actors use these technologies to replace their relational labor (think driverless cars), they forfeit ultimate control and renounce the ability to give authentic care. Even when the technology at issue is efficient and low risk, a strict rule will identify this renunciation of the care obligation as a distinct kind of wrongdoing. Justice theorists have been “embarrassed” by strict liability for too long. Identifying the relational wrong at its core may fortify tort to do justice in the coming era of delegated care.
Introduction
Theorists who contend that tort is designed to do justice cannot explain strict liability. The strict sector plagues these scholars because it extracts payment from defendants who have acted reasonably and are considered innocent. If tort is about wronging and recourse, then strict liability makes no sense. The theoretical vacuum left by this conclusion has been filled by tort instrumentalists. They justify strict liability as a path to systemic policy goals like social insurance or cost spreading, which have little to do with righting wrongs. Of course, when strict liability is the handmaiden of economic policy, judicial willingness to apply it will rise and fall with the mood of the market. And against the market exuberance of the past decade, some have declared strict liability “dead.” This Article mounts a justice defense for strict liability, arguing that even reasonable actors can commit wrongs. On a justice account, the purpose of tort is to facilitate authentic human care between community members. So, defendants wrong their relational counterparts when they delegate to nonhuman, care-insensitive instrumentalities the task of executing relational behavior. These delegations distort the paradigmatic bilateral relationships that lie at the heart of tort into synthetic, trilateral counterfeits. Using inanimate technology like dynamite or artificial intelligence (AI) is often low cost and beneficial across a universe of cases. But it impedes personal caregiving, will inevitably cause a trace number of injuries, and falls short of tort’s relational expectations. Because such delegations can be fully theorized as genuine wrongs, subjecting the injuries they cause to a strict rule need not “embarrass” justice theorists, as it long has. On the contrary, the Article contends that strict liability is poised for a comeback, just in time to manage the growing problem of injury-by-algorithm.
The piece begins in Part I by reviewing strict liability doctrine over time. For nearly a century, judges and torts scholars have explained this sector as a necessary backup response to undertakings that elude negligence liability because they are reasonable, but nevertheless involve “extraordinary” dangers for which the defendant should pay. This explanation has tripped up scholars who consider tort a law of relational justice, because they have never been able to isolate a specific kind of unjust relationship that causes extraordinary danger. Instead, they have capitulated to an instrumental version of strict liability, which draws on political and economic values outside of tort to identify select dangers as “extraordinary.” Once strict liability is reduced to a policy tool, there is nothing to stop legal decision-makers from classifying the undertakings they favor for policy reasons as ordinarily dangerous and applying a lenient negligence rule that operates as an economic subsidy for those activities. In fact, the modern judicial reluctance to invoke strict liability rules has rendered the sector moribund.
Part II lays the foundation for reviving strict liability as a tool of tort justice by highlighting the role of human cognition in the delivery of interpersonal care and in the DNA of tort doctrine. It first frames tort as a body of law that helps to construct informal, sociological communities. Unlike polities, which follow top-down directives from the state, sociological communities achieve solidarity and coordinate social life through horizontal, person-to-person relationships. As structurally equal community members move through the world, they are expected to maximize self-interest while minimizing other-harm. And the community expects them to strike a care-risk balance that reflects the group’s shared values. Striking this balance is impossible without the human ability to place others in social context. Human cognitive processes allow actors to intuit how their goal-oriented behavior will affect others, to evaluate whether that outcome would be accepted by the community, and to make injury-avoiding adaptations in real time. So, it turns out that the relational cognition biologically unique to human beings is what produces interpersonal care that reflects community notions of justice. Conversely, when an actor ignores or disregards social context and invades an other’s well-being to advance his self-interest, the community condemns that suspension of care as a wrong requiring repair. This outcome does interpersonal justice within the relationship; it also announces fairness norms to the group at large. As the law of community construction, tort surfaces and formalizes these processes. Bilateral relationships are the unit of community construction, and bilateral disputes are the unit of tort adjudication. Further, because communities expect their members to bring human intuition to interpersonal conflict, the rules of tort recognize that attention to social context is a precondition for the delivery of care.
Part III uses these insights to explain that the relationships it labels “trilateral” are intrinsically dangerous for reasons having nothing to do with policy and everything to do with interpersonal justice. In the traditional bilateral relationship, an actor engages directly with another as he pursues his self-interest. Bilateral relationships carry the threat of only ordinary danger because throughout his goal pursuit, the actor’s human cognition assigns social significance to the other and prompts the actor to adapt his goal strategy in real time to avoid harming the other. So, bilaterality facilitates the observation of community norms and prosocial coordination. In trilateral relationships, actors pursue self-interest by deploying nonhuman instrumentalities to replicate their personal labor. A trilateral relationship poses extraordinary danger because the actor delegates his goal pursuit to a mechanism that lacks human cognition. Inanimate third parties like dynamite and algorithms have inherent action properties that save the actor time and money on the way to his goal. But, once activated, they lack the human ability to assign relational significance to community members in their path. And the very action properties that make them efficient negate their adaptability when injuries are imminent. So, trilaterality thwarts the observation of community norms and prosocial coordination. For this reason, creating a trilateral relationship can be understood as a freestanding wrong even when the nonhuman intermediary is economically rational and prone to operate safely in most use cases. Once extraordinary danger is defined as the creation of trilaterality, strict liability case law makes perfect sense as a jurisprudence of justice. Defendants are held to a fault standard when they retain personal control over risk management, but they have historically been held to a no-fault standard when they delegate to risk-insensitive instrumentalities the power to take action that affects a plaintiff.
Finally, Part IV applies the trilateral theory to the emerging problem of injury-by-algorithm. Many experts anxious to integrate artificial intelligence into modern life have urged adoption of specialized regulations or new tort regimes to handle automated injuries. But clarifying the respective dangers associated with bilateral and trilateral relationships retrofits the venerable rules of tort to tackle emerging technologies without relying on a captured regulatory state or on common-law rules developed specifically for AI. When AI assists actors who retain control over their relationships with others, the result is bilaterality, and applying a negligence rule is fair. When AI replaces actors’ relationships with others, the result is trilaterality, and justice requires a strict liability rule. Part IV applies these insights to three modern uses of AI—autonomous vehicles, robot journalism, and facial recognition technology—to show how tort should treat the injuries they cause. The Article concludes that not only does strict liability do justice, but it does an indispensable kind of justice in the age of automated care.
I. Tort’s Category Problem
Over the past two centuries, American personal-injury law has undergone significant evolution, moving away from a writ-based system, and settling into the three modern categories: intentional wrongs, negligent wrongs, and strict liability wrongs. Although the tripartite organization of tort is well accepted today, scholars have long wrestled over the relative importance of the negligence and strict liability categories.1 In fact, this debate has been aggravated over the past several decades, as tort has come to be understood as an instrumental body of law serving external goals like efficient resource allocation.2 As the efficiency school has ascended,3 the fault and no-fault categories have been evaluated purely in terms of the instrumental results they produce.4 Negligence has emerged as the preferred category,5 while strict liability has been relegated to a backup role, trotted out to force the odd reasonable actor to internalize costs when his rational undertaking causes undesirable harms.6 Of course, when strict liability is treated as a mere policy expedient to curb edgy actors, rather than as an independently just mode of liability, its reach will shrink in proportion to the market’s appetite for edginess. Those who would vitiate strict liability are signaling that economically appealing risks are beyond the reach of the law. A muscular new school of corrective-justice scholars has challenged tort instrumentalism in recent years. But its members have struggled to fortify strict liability because they cannot explain why it is just to make reasonable actors pay for ostensibly innocent behavior.
A. Theories of Strict Liability: The Rise of Instrumentalism
During the early years of the American legal system, “there was a general private law presumption in favor of compensation,”7 which meant that liability was often assigned in the absence of fault. Courts occasionally supplemented this no-fault rule with negligence liability when an actor shirked a duty arising from his status (as a sheriff or a common carrier, for example), or resulting from two parties’ agreement to exchange care for risk.8 As nineteenth-century technological and economic advances began to bring strangers into the same physical or commercial environment, the law of tort had to make a choice. It could maintain a baseline assumption that individuals had a duty to avoid causing any injuries to any strangers they encountered.9 Or it could hold that individuals were obliged to avoid causing only unjustified injuries to strangers who entered shared commercial or geographic space. In essence, judges were being forced to choose between a strict liability default and a more flexible negligence scheme that stigmatized a much smaller universe of faulty injuries only.10 Realizing that the “premodern”11 strict liability rule could inhibit the burgeoning industrial sector of the day,12 judges increasingly opted for a negligence regime that would shelter risk-taking defendants from liability so long as their behavior was deemed “reasonable.”13 The negligence principle was thought to be suited to a culture “that was changing to value action and innovation rather than passivity and obedience.”14
Despite their enthusiasm for negligence, twentieth-century tort leaders resisted the complete displacement of strict liability.15 Unfortunately, they were unable to agree on why it continued to serve a purpose even as negligence was ascending. This incoherence arose from a deep theoretical crisis emerging in personal-injury law as academics were starting to reckon with the gap between “law in action” and “law in the books.”16 These Legal Realist leaders began pointing out that legal doctrines long accepted as inevitable and just manifestations of shared values and morals could be better explained as rules that furthered policy goals preferred by elite institutional actors.17 They urged judges to exercise that policymaking function transparently, and toward socially desirable ends.18
Some members of this Legal Realist school embraced the strict liability sector as an overt instrument of “social policymaking.”19 For example, Fowler Harper advocated strict liability for undertakings highly likely to cause injury “to entail the least hardship upon any individual and thus to preserve the social and economic resources of the community,” purely as a matter of “social engineering.”20 William Prosser, too, advocated strict liability for a narrow range of undertakings causing “inevitable” harms, not because those harms were the product of “moral[] or social[] wrong, but because as a matter of social engineering the responsibility must be [that of the actor who undertook the behavior].”21
In the shadow of this instrumental movement, a committed school of tort leaders continued to position tort as a law of rights, wrongs, and interpersonal justice. These scholars, too, wanted to retain a strict liability sector. But unlike Harper and Prosser, they thought that certain undertakings were genuine interpersonal wrongs that merited liability in the absence of carelessness or fault. For example, Thomas Atkins Street suggested that the kind of danger triggering strict liability resulted from inherent properties of the harm-causing thing or creature.22 Instrumentalities were dangerous per se, he suggested, when they had a “mischievous propensity . . . to do damage,” typically because they lacked human consciousness and would “stray” without adequate human oversight.23 Such “propensities” appeared to result from biological or chemical properties inherent to the thing, as Street went on to describe animals, fire, and explosives as belonging to the relevant category of “inanimate” instrumentalities that could “go about to do” things without the direct involvement of the owner or user.24 Defendants who owned these instrumentalities and set them in motion were responsible for the injuries they caused.25 Torts scholar Clarence Morris echoed this theory in 1952, calling strict liability “liability for an escaping inanimate destructive force.”26
The debate over justice and instrumentalism plagues tort law to this day, deepening into what one modern observer has called a “cold war.”27 And strict liability continues to operate as a contested site in that war. However, that contestation has in some ways been papered over. Antagonists long ago settled on shared, but vague, language to describe the behavior giving rise to strict liability: reasonable undertakings that create “extraordinary danger.”28 This formulation creates the impression of determinacy but is actually a fairly open-textured descriptor.
Instrumentalists have little trouble assigning meaning to the extraordinary-danger descriptor because they see tort as a subsidy for desirable policy outcomes.29 Consequently, “extraordinarily dangerous” undertakings are those that produce results inconsistent with the political economy they want tort to produce.30 Of course, if strict liability is understood as a lever to shape the political economy, the power of the sector will wax and wane as different market configurations go in and out of fashion.
Though tort formalists and tort instrumentalists were pulling away from each other in the early twentieth century, the politics of the day explain why their antipathy did not boil over in the strict liability sector. The political commitments of the instrumentalists at the time led them to a strict liability rule that produced the same functional results sought by the justice school. Influential Progressive Era scholars who accepted tort as an engine of social change were interested in a law of strict liability that would support economic equality and corporate responsibility when doctrines like assumption of risk were limiting negligence liability.31 The American Law Institute (ALI), for example, described strict liability in the Restatement (First)32 in terms that “expand[ed the sector] . . . on progressive enterprise . . . principles.”33 And, as long as tort was being deployed as a worker- and consumer-friendly instrument, those who saw tort as a law of relational justice and equality had little reason to quarrel.
This strict liability détente fell apart as tort instrumentalists shifted toward a more laissez-faire ideal for the American political economy and grew wary of strict liability. For example, when Reporter William Prosser described strict liability in the Restatement (Second), he did so in “modest” terms.34 Though he acknowledged the need for a strict sector in his treatise,35 he engineered that sector to have a narrow reach in the Restatement. Specifically, he invited courts to maintain a fault requirement for activities that carried a notable risk of harm, so long as judges found them “valuable” to a community.36 And where the Restatement (First) isolated for possible strict treatment those activities whose serious risks could not “be eliminated by the exercise of the utmost care,”37 Prosser’s version discouraged courts from using the strict rule by cautioning that “[m]ost ordinary activities can be made entirely safe by the taking of all reasonable precautions.”38
Diluting the test in this fashion ultimately shrank the reach of strict liability and expanded the reach of negligence.39 In the years since the Restatement (Second), judges have “confined strict liability . . . to a . . . narrowly defined set of activities,” and negligence now governs the vast majority of undertakings.40 The Restatement (Third) changed the language of the strict liability test, but not its substance.41 Overall, the Chief Reporter of the Restatement (Third) of Torts, Intentional Torts to Persons, has summarized that “very few activities have been found to be abnormally dangerous under the Restatement (Second)’s test, and the Restatement (Third)’s test is unlikely to change this result.”42 By some lights, strict liability is functionally “dead.”43
B. Theories of Strict Liability: The Collapse of Justice
Under instrumentalism, the reach of strict liability expanded when progressive ideals were in fashion early in the twentieth century and contracted when laissez-faire ideals took center stage later in the century. Justice theorists of tort have watched these gymnastics from the sidelines, declining to join the policy debate because they insist that tort should follow an internal logic immune to external considerations.44 They believe tort produces justice by identifying and redressing relational wrongs.45 But they have simultaneously “embrace[d] the idea that strict liability as generally understood does not involve wronging.”46 So, unlike instrumentalists who candidly treat strict liability as a tool of external policy preferences, justice scholars casting about for an internal principle rationalizing this sector are in a “predicament” because “[s]trict liability is an embarrassment to their theories.”47
Some justice theorists have escaped this predicament by claiming that strict liability is just because it forces compensation for injured plaintiffs who cannot succeed in negligence. For example, John Goldberg and Benjamin Zipursky claim that strict liability is a device that guarantees recourse for those injured by defendant behavior that was careful, but still “unilaterally impose[d] well-known, well-defined, and substantial risks upon others.”48 Curiously, they justify strict liability as a response to “substantial” risk taking,49 but do not explain the difference between substantial risks and ordinary ones.50 Gregory Keating has followed similar logic, saying that careful-but-harmful behavior that does not involve a “conduct-based” wrong may be a “conditional” wrong where it causes “unreasonable” harm.51 He defines harm as “unreasonable” “in circumstances where it would be unjust for the injurer to make the victim bear that cost.”52 But those circumstances are not detailed.53
Other justice theorists make a more direct case that there is a real wrong underlying strict liability. For example, Tony Honoré has argued that one result of shared “personhood” is “responsib[ility] for our actions and their consequences.”54 In negligence, responsibility is conditioned on fault, while in strict liability, responsibility is conditioned on conduct that “carries with it a special risk of [socially dangerous] harm.”55 While Honoré’s defense of strict liability as a moral rule turns on “social danger,” he offers no principle that separates ordinary behavior from its “socially dangerous” counterpart. John Gardner, too, defends holding actors strictly liable when they have “fair warning” that they are embarking on an activity preselected for strict treatment.56 But Gardner does not specify which activities should be preselected.
In the same vein, George Fletcher has argued that liability arises when “the defendant created a risk of harm to the plaintiff that was of an order different from the risks that the plaintiff imposed on the defendant.”57 He has pointed to blasting, fumigating, and crop dusting as examples of nonreciprocal risks in the community because they exceed the level of risk to which all members of the community contribute in roughly equal shares.58 Unfortunately, Fletcher’s reciprocity theory is plagued by the same vagueness found in other moralists’ accounts. Risks are “nonreciprocal” when they are of “an order different” from other risks.59 But he never specifies the magnitude of difference that renders a risk nonreciprocal.
In sum, many justice theorists have tried to explain why the activities subject to no-fault liability are genuine wrongs, but their accounts all run aground on the same shoal. Buying into the agreed Restatement description of strict liability’s purview, they speak of a difference between “ordinary” dangers and “extraordinary” ones. But they do not give a determinate test for distinguishing between the two that reflects tort’s internal, relational logic.60 Justice theorists have left themselves vulnerable to critique because their theory does not persuasively explain all three sectors of tort liability. Instrumentalists do not face the same vulnerability because, for them, the line between ordinary and extraordinary danger is dictated by economic or policy considerations. In the twenty-first-century political economy, virtually all rational risks are welcomed, and strict liability is tort’s phantom limb.61
C. Reviving Justice, Reviving Strict Liability
Reducing tort to an efficiency-based rule of negligence may accelerate market growth that benefits many. But it leaves injured people without recourse when efficient behavior hurts them.62 For those who believe tort is a body of law about private relational wronging independent of external economic considerations, strict liability promises a solution to this dilemma. To date, justice scholars have not coalesced around a determinate model of “extraordinary danger” or a theory of reasonable wrongdoing. But they do agree on one foundational concept: tort’s underlying goal is to facilitate the just conduct of community life.63 This concept sets the stage to identify the wrongfulness lurking beneath some reasonable behavior, as discussed in Parts II and III below. Identifying the relational wrongs that can live in reasonable activities frees tort to stigmatize those wrongs even when they are cost justified and profitable.
To be clear, the objective of this Article is not to change tort from an instrument of laissez-faire economics into an instrument of progressive economics, and it is not to express policy opposition to adoption of artificial intelligence in modern life. Its goal is to decouple tort from the economic-policy question altogether by reviving a relational understanding of “wrongdoing.” That said, if wrongdoing is no longer equated with forgoing cost-justified preventive measures, tort may sweep a greater number of profitable behaviors within its ambit. This is not the goal of tort-as-justice, but it is a happy byproduct. Economic “welfare” may have been an effective proxy for relational “well-being” at one time,64 but wealth inequality has deepened enough to raise questions about the legitimacy of that proxy today.65 In fact, it may be no coincidence that the last time tort theorists fought to preserve strict liability as a counterweight to the negligence principle66 was in the 1920s and 1930s,67 when the gap was equally wide.
II. Bilateral Relationships: Cornerstone of Community, Cornerstone of Tort
Tort can be helpfully understood as a body of law that does justice by constructing coordinative communities that provide individuals both security and fulfillment in areas beneath the notice of the state.68 It does so, I have argued elsewhere, by asking the group to determine when one member’s pursuit of fulfillment has imperiled another member’s security.69 These determinations accrete toward a group understanding of the behavior that is mutually considered “just.” That is, one-on-one disputes subject to community judgment produce remedial orders that return individual adversaries to coequal status, and simultaneously signal behavioral norms to the group at large.
Because tort can be described as a law of community construction, it is no accident that community custom and tort doctrine both treat person-to-person relationships as the building block of prosocial conduct.70 Unlike political communities, which follow top-down directives from the state, sociological communities operate through horizontal, one-on-one conduct. Structurally equal community members are expected to perpetuate group solidarity by maximizing self-interest while minimizing other-harm, all within the group’s behavioral boundaries. Individuals are able to strike this balance because of unique human cognition that allows every community member to place fellow community members in social context. Human actors are biologically equipped to intuit how their goal-oriented behavior is affecting others, to evaluate whether that effect is community approved, and to make injury-avoiding adaptations in real time. In other words, the relational cognition unique to human beings is what produces the interpersonal care that reflects community notions of justice. When an actor ignores or disregards social context and invades an other’s security while pursuing fulfillment, the community may reject that behavior as an undesirable failure of care.
Tort helps to construct community by surfacing and formalizing this cognitive care–production process. Just as bilateral relationships are the unit of community construction, bilateral disputes are the unit of tort adjudication. And just as communities expect their members to exercise human cognition to pursue self-fulfillment without endangering other-security, tort identifies and formalizes as doctrinal rules the steps in the cognitive sequence that produce care.
A. Bilateral Relationships as the Unit of Community Conduct
Law operates simultaneously within multiple, overlapping communities. Public law, for example, is produced by and governs political communities.71 Political communities have been described as “artificial union[s] resting solely on rational deliberation regarding means to an end or goal.”72 Members of polities formally agree on organizational goals. In these “justification to the world at large” social arrangements, the agreed goal is the accepted mechanism for delivering value and measuring behavior.73
In contrast, private law is produced by and guides sociological communities. A sociological community is “a natural organic grouping of individuals.”74 These communities “unit[e] . . . a plurality of subjects, and . . . [bear] a life which is carried on in and through th[o]se subjects.”75 In such social arrangements, members have not explicitly agreed on a single shared value they will jointly further. Rather, they constantly “face[] the other [members] as subject to subject” to work out mutual accommodations that are approved or condemned by the community.76 This “justification to subject” structure means that if the community is to flourish, falter, or modernize, it will do so incrementally through these face-to-face, or bilateral, relationships. In other words, the outcomes of individual relationships accrete over time to construct the direction and norms of the community as a whole. As philosopher John MacMurray has written, “The structure of a community is the nexus or network of the active relations . . . between all possible pairs of its members.”77
It is through the conduct of discrete bilateral relationships that the community heuristically identifies “shared motives” and moves toward shared goals.78 How so? Community members are structural equals, each with the same entitlement to fulfillment and security. But at the same time, they have individualized goals and preferences. Consequently, when two or more community members come into the same “problem space,” they may clash. These clashes cause friction in the moment, but they may also be opportunities to clarify community preferences.79 In other words, resolving interpersonal conflicts creates and maintains community solidarity.80
Importantly, even where two individuals’ goals are at odds, their behavior is “not necessarily designed to harm the other[] but rather [is] the result of . . . [their respective interests in] self-enhancement.”81 Consequently, when such goal disconnects arise, an actor can choose one of three action programs. He can suppress his own free will and yield to the will of another. He can fully indulge his own desires, thereby suppressing the other’s free will. Or he can maintain “systematic cooperation” by submitting to a “set of rules or principles which are the same for all and which limit for each the use of his own power to do what he pleases.”82
The third of these action programs, systematic cooperation, is the preferred mode of community conflict resolution. The advantage of sociologically compact communities is their ability to establish informal frameworks for frictionless living, thereby reducing the need for costly state rulemaking, prosecution, and punishment.83
But informal frameworks may have limited reach, and they may grow obsolete. This is why communities need and benefit from conflicts that elude cooperative resolution. When a member claims that his interest in bodily security, property, or personality has been injured by an actor’s pursuit of self-interest, the community has a chance to evaluate whether the actor’s zeal was within or outside the range considered acceptable by the community. In other words, these disputes are opportunities for the community to reexamine and publicize its norms and values.84
When one-on-one disagreements arise, community members are invited to decide if either actor “attache[d] a greater weight to [his] own interests than to the interests of others” and to stigmatize those who have done so as unacceptably “egoistical or antisocial.”85 These decisions can affirm existing norms by applying previously identified boundaries to an actor’s aggression.86 Or they can create new norms by identifying new boundaries that keep pace with economic, technological, or cultural changes. In other words, bilateral disputes that elude coordinative resolution are a second-best outcome for the participants, but a valuable opportunity for the group. These disputes “help to make social pathologies visible and . . . contribute to the restructuring of a society’s institutions and to the creation of new participative regulations. . . .”87 Once a community has reaffirmed or modernized norms, members have clear signals about the acceptable range within which they are expected to balance self-interest and other-care. This is the path of “mutual influence” that generates community.88
Crucially, not all person-to-person interactions arise from explicit, voluntary, or long-term relationships. Many involve strangers whose individual goals compel them to share limited space or allocate scarce resources with others whose goals conflict. These interactions, even when fleeting, impersonal, or inadvertent, are still “relational” because each actor’s conduct will alter the world in which the other moves.89 When parties fail to coordinate these interactions within the community’s behavioral norms and an injury results, the community can identify that failure as a wrong and hold one or both of the parties responsible for it.
In short, each subject-to-subject interaction is a site of mutual influence within which actors strive to retain the resources they most value while sharing or exchanging resources that others more deeply value, be they wealth, physical autonomy, social capital, or emotional safety. Two parties with different goals and a range of behavioral options are expected by their communities to devise a coordinative action plan that produces a balance of self-interest and other-care within norms that have been announced in the resolution of past conflicts. Communities identify behavior that falls outside these norms as wrongful and can compel the wrongdoer to restore his fellow community member to equal status through symbolic or practical action.90 This achieves justice between the parties, and it signals to those outside the bilateral pair how they can act justly in future interactions.
B. The Mechanics of Relational Cognition: Sensing, Contextualizing, Acting
Person-to-person relationships are the structural building block of community. While relational cooperation is community’s ultimate goal, relational conflict contributes to that goal by providing opportunities to clarify community values. But what are the internal workings of these bilateral relationships? Within conflict encounters, the two participating actors are attempting to reach a prosocial accommodation between self-interest and other-interest. Philosophers, cognitive psychologists, and decision theorists agree that the driving force within these goal-oriented, other-directed interactions is a cognitive process known as dynamic decision-making.91 In a so-called problem space, an actor devises a “generative representation” of “the problem itself, the goal to be accomplished, and the set of actions that [he] will consider in the course of solving the problem.”92 The actor is guided by a “search strategy” to test hypothetical problem-solving actions, and to evaluate the likely success of each to determine whether the desired goal has been reached or requires an alternate problem-solving action.93 Before the actor can identify strategies, he first “perceive[s] certain cues in the environment that . . . trigger . . . associated memories about the best courses of action.”94 As the actor moves through the problem space, “decisions are motivated by [both internal] goals and external events. . . . [They] . . . are made based on experience,” “but can be modified by incoming information.”95
The dynamic decision-making that supports prosocial behavior is made possible by a distinctly human cognitive sequence that features “perception, learning, memory, logical reasoning and problem solving.”96 In other words, each goal-oriented actor “cross[ing] paths and shar[ing] space”97 with a goal-oriented other must perceive the world and the other with his senses, put those sensory perceptions into a relational context, and act on those contextualized understandings of the world to move toward his own goal without unduly inhibiting the other. This cognitive sequence—sensing, contextualizing, and acting—is discussed below.
1. Sensing
In both physical and human interactions, the actor’s choice repertoire arises from his knowledge about the functioning of the things and people around him. On the most basic level, antisociality or aggression can be understood as a crossing of relevant boundaries, be they the boundaries around an other’s body, property, or emotional well-being.98 Consequently, an actor’s relational behavior begins with identifying other people and things in his environment through the use of his senses.
When an actor’s brain uptakes sensory stimuli, the actor recognizes “space located around or near the body” of another, which is a precondition to making choices about how to interact with that other’s body or property.99 The “passive inflow from the environment of information . . . is used to make and process representations of objects and events.”100
While the goal of perceiving the other and the world is to facilitate coordination, uptaking sensory input is itself a fairly self-contained endeavor.101 One sees, hears, tastes, and smells using one’s own physical capacities, without any reliance on or cooperation with others. That is to say, although sensing one’s environment initiates relation, it is not a highly relational phase of the action sequence.102
2. Contextualizing
In the second phase of the cognitive sequence, the brain places perceived objects and people into physical and social context. Once “sensory organs deliver up information” to an actor, he “constru[es], plan[s], motivat[es], and regulat[es]” behavior in response to that information.103 Human beings create and use “functional consciousness” to process information and generate a menu of action options.104 Much of this process takes place “tacitly.”105 In fact, people are perpetually toggling between “‘focal awareness’ of an integrated whole (something [they] are conscious of knowing at any given instant) and the subconscious use of ‘subsidiary awareness’ that is learned in the course of experience.”106 One cognitive theorist has described the task of understanding a problem space as inherently “naturalistic.”107 Others, who study so-called embodied cognition, have identified contextualization as a motor process:
The body is especially adept at alerting us to patterns of events and experience, patterns that are too complex to be held in the conscious mind. When a scenario we encountered before crops up again, the body gives us a nudge: communicating with a shiver or a sigh, a quickening of the breath or a tensing of the muscles. Those who are attuned to such cues can use them to make more-informed decisions.108
Notably, when a human being interacts with a “complex physical environment,” routinized, correlative thinking about previous interactions reduces the need to uptake or process new information.109 In contrast, when a human being encounters other human beings in a social environment, a different and more effortful cognitive architecture is activated.110 Here, human actors must make “prediction[s] drawn from perceptions of other persons’ intentions, motives, and . . . common sense representations of human capabilities, together with knowledge of accepted practice.”111 In other words, human actors cannot rely on input-output correlations to assess the goals, preferences, or behavior of other human beings. Instead, actors intuit the motivations and likely behavior of others using “a variety of heuristics.”112 These may include information about an other’s age, race, and gender and the predictive inferences drawn from such information.113 They may also intuit using more individualized information about the other, such as organic verbal communication during an encounter, or “social acts” like facial expressions and gestures.114 Finally, actors may draw on their understanding of social custom to assign localized meaning to these traits and actions.115
Unlike sensing, which is a self-directed and self-contained task, cognitive theorists have described contextualizing in distinctly relational terms. They have suggested that actors are only able to intuit and ascribe intentions to others because they share humanity—the experience of inhabiting a body and a social world. As neuroscientist Walter Freeman has said, “[K]nowledge is intrinsically social. It is embedded in the particular culture in and by which a group of humans live.”116
This means that even without being able to fully penetrate the other’s interior life, an actor can “understand the [other’s] lifeworld” because the actor is “also in the world.”117 Contextualization leverages humanity to imagine as accurately as possible the motivations and goals of an other. When he contextualizes, the actor concentrates his attentional energy away from himself. He pivots from the habituated correlation that is adequate to process the physical world toward the more effortful and personalized cognition that is needed to process the relational world.118
Contextualizing is essential to devising an action plan that attempts to balance self-realization with other-care. Actors and others experience the world mutually, each member of a relational pair symbiotically attempting to achieve personal satisfaction without oppressing the other. Communities achieve solidarity in part by demanding that members respect others’ agency and goals through “situation recognition”119 in a given problem space. Without thoroughly and personally contextualizing sensory information about an other to infer his goals and preferences, the actor cannot hope to deliver other-care when he carries out the final step in the behavioral sequence.
3. Acting
The behavioral sequence culminates in an action phase. Here, the actor evaluates a variety of possible behaviors and chooses a provisional course of action designed to further his goal.
First, the actor judges his own “capabilities, goal aspirations, [and] outcome expectations.”120 After generating action plans that serve his self-interest, the actor asks whether those plans are feasible in light of “opportunity structures and impediments” in the existing environment.121 Chief among these opportunities and impediments are the other community members in the problem space. For example, if a driver perceives a child at play near an intersection with a stop light, competent contextualization might suggest that the child is liable to enter the intersection and is vulnerable to a collision with the car. The child is, in other words, an impediment to the driver’s goal of crossing the intersection while the light is green. This context may suggest two action plans to the driver: slow down and miss the green light or honk to signal the child to remain on the sidewalk as he passes. Similarly, if a driver wishing to change lanes intuits that the driver in the next lane is aggressive, he may consider that driver an impediment to the lane change and can respond by staying in his lane or by intensifying his own aggression to execute the lane change.122
Here, as in the sensory phase, the actor’s energies are self-focused and self-contained. Yes, he is using the product of the other-directed contextualization phase to assess external opportunities and impediments. But he is doing so primarily to test the appeal of various strategies for pursuing his own goal.123
Importantly, as individuals execute action plans in dynamic person-to-person interactions, “decisions . . . can be modified by incoming information.”124 So initiating action does not terminate the cognitive tasks of sensing and contextualizing. On the contrary, the actor’s success in navigating the problem space depends on his ability to sense and contextualize while he acts. Only by “monitor[ing] and analyz[ing] how well . . . [his] actions have served [him]” can he “change . . . strategies accordingly.”125 If, for example, an apparently accommodating driver in the next lane honks angrily when a driver begins to cut him off, the lane-changing driver may abort that attempt.
In sum, human beings interact in the world through a neurocognitive sequence of “sensing, contextualizing, and acting.” All three phases are essential to the production of care within bilateral relationships. But contextualization demands an actor’s relational energy in a way that sensing and acting do not.126 The actor brings to the contextualization phase of cognition tacit, often-embodied knowledge about the lifeworld of other people; this allows him to intuitively ascribe imagined meaning and value to the other’s life and goals. Contextualization is indispensable to the actor’s success in devising an action plan that simultaneously actualizes the self and honors the other. While actors engaging with the physical world can contextualize using mechanical correlation or category-level thinking, actors engaging with the human world cannot.127 Coequal subjects in the relational space all inhabit complex lifeworlds and nurse idiosyncratic goals.128 The interpersonal contextualization needed to produce careful action in the social space requires tacit knowledge acquired over a lifetime and grounded in experience.129 It is an inherently humanistic task, and one that may be described as “relational labor.”
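For readers who find it useful to see the sequence stated schematically, the sketch below is a minimal illustration in Python. It assumes nothing beyond the account just given: the Actor and Percept names, the toy tacit-knowledge table, and the looping structure are hypothetical devices for showing how sensing, contextualizing, and acting repeat so that incoming information can modify a plan already in motion; they are not a model of human cognition.

```python
# A minimal, illustrative sketch of the sense-contextualize-act loop described
# above. All names here are hypothetical; the aim is only to show the sequence
# and its real-time feedback, not to model human cognition.
from dataclasses import dataclass, field


@dataclass
class Percept:
    """Raw sensory information about a person or thing in the problem space."""
    source: str
    data: dict


@dataclass
class Actor:
    goal: str
    tacit_knowledge: dict = field(default_factory=dict)

    def sense(self, environment):
        # Phase 1: passive uptake of stimuli from the environment.
        return [Percept(source=o["name"], data=o) for o in environment]

    def contextualize(self, percepts):
        # Phase 2: assign social significance to each percept, drawing on tacit
        # knowledge to infer the other's likely goals and vulnerabilities.
        return {p.source: self.tacit_knowledge.get(p.data.get("category"),
                                                   "unknown risk")
                for p in percepts}

    def act(self, context):
        # Phase 3: choose a provisional action that pursues the goal while
        # avoiding the harms the contextualization phase made salient.
        if any("vulnerable" in risk for risk in context.values()):
            return "adapt goal pursuit (slow down, yield)"
        return "proceed toward goal"


actor = Actor(goal="cross the intersection",
              tacit_knowledge={"child": "vulnerable; may dart into the street"})
environment = [{"name": "pedestrian-1", "category": "child"}]

# Dynamic decision-making: the loop repeats, so information arriving after a
# plan is chosen can still modify it.
for _ in range(2):
    context = actor.contextualize(actor.sense(environment))
    plan = actor.act(context)
print(plan)  # -> "adapt goal pursuit (slow down, yield)"
```

The point of the loop is the feedback: the plan chosen in the acting phase remains provisional because the sensing and contextualizing phases never stop running.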
C. Relational Cognition in Tort Doctrine
Tort architecture mimics the community-constructing process by treating bilateral relationships as the site where care is produced or neglected and by using one-on-one disputes to unearth community conceptions of wrong.130 Tort doctrine surfaces the steps in the cognitive process that produce care and formalizes the community expectation that members will follow those steps to provide coequal group members relational respect.131 Failures to uptake sensory information, assign it social context, and execute other-regarding action plans can all thwart caregiving. The specifics of tort doctrine say as much.132
1. Sensing
The Restatement sets forth an expectation that actors display baseline competence in perceiving relevant parts of their environment as they contemplate a course of action that could have an effect on that environment. This duty is reflected in Section 283C of the Restatement (Second) of Torts, which holds even physically challenged actors to an expectation of reasonable behavior.133 This rule obliges people who are aware of their own physical limitations to use supplemental measures to obtain accurate information about the things and people in their immediate physical space.134
The Restatement recognizes that perceiving things and others in one’s environment is crucial to pursuing self-interest while delivering other-care; one may fail in the care imperative by using inadequate sensory mechanisms. Of course, it is rare for an actor to entirely lack sensory faculties or to wholly reject supplemental sensory mechanisms, so it is rare that actors are found wrongful for step-one deficiencies.
2. Contextualizing
Tort doctrine also expects the actor to assign social significance to the sensory information he uptakes. This expectation is articulated in Restatement (Second) Section 290, which demands that actors have and use knowledge about the world in order to assess when self-interested behavior may interfere with the goals of an other.135 Specifically, the Restatement charges actors with knowing “the qualities and habits of human beings and animals and the qualities, characteristics, and capacities of things and forces in so far as they are matters of common knowledge at the time and in the community.”136
This rule amounts to a doctrinal requirement that actors have and use “tacit knowledge.” It nods to the fact that subconscious knowledge about the physical and relational world informs and narrows the range of action options that can further self-interest without invading other-interest. In fact, without specifically using cognitive jargon, the Restatement surfaces the inexorable link between contextualization and relationally responsible action. Actors are obliged to
know the ordinary operation of natural forces in the locality in which he lives which are likely to be affected by his conduct. Thus, a man living in the northern part of Minnesota is required to expect extremely cold temperature in early winter. A man living in the tropics is required to expect hot weather, even in winter.137
“Every man should realize that heavy rainstorms are likely to produce floods in mountain streams” and should be familiar with “the ordinary operation of well-known natural laws. . . . [including] the poisonous qualities of many drugs, chemicals, and gases and the explosive or inflammable qualities of many chemical compounds and the intoxicating quality of certain liquids.”138
Further, actors must recognize the “traits of particular classes,”139 like children, whose “inexperience and immaturity”140 are part of the context that influences action options.
The Restatement demands that actors competently contextualize because the quality of context dictates the quality of action. Absent tacit knowledge of the physical and relational world, actors are unable to predict how their conduct will interact with background forces to affect the bodies and property of others.141 So, actors contemplating speedy driving in a snowy Minnesota winter are held to the tacit knowledge that tires may skid out of the driver’s control and lead the car to collide with nearby cars or pedestrians. Actors planning to light a cigarette in a gas station or garage are held to the tacit knowledge that spilled gasoline or oil might catch fire and burn nearby property or people. Actors who see children playing on the sidewalk are held to the tacit knowledge that youngsters might dart into the street. In each situation, the actor’s tacit knowledge allows him to assign significance to incoming information—about his tires’ grip on the road or the action of the children at play—and to calibrate his action in real time to maintain self-interest (continue moving forward on the road) while delivering other-care (slowly enough to brake for children).
Not only does the Restatement explicitly demand competent contextualization, but it also integrates that expectation into several ostensibly unrelated rules. For example, the “last clear chance” doctrine suspends the contributory negligence bar for an inattentive plaintiff if the defendant has, but squanders, an opportunity to uptake and contextualize information that should prompt real-time adjustments in response to the plaintiff’s faulty behavior.142 So, if a speeding driver observes a pedestrian stopped in the middle of the road, he may initially maintain speed, anticipating that a careful pedestrian would hear the car and jog ahead. But if, when the driver gets closer, he sees that the pedestrian is gesturing to his shoe stuck in a grate, the driver is obliged to take in that communicative message, contextualize the pedestrian’s predicament, and slow or stop accordingly. The last clear chance rule assigns legal consequences to the defendant’s failure to contextualize that he has encountered a careless other and to make real-time adaptations that could minimize harm to that other.
Tort further foregrounds contextualization in rules that encourage actors to communicate with behavioral counterparts before taking action. For example, property owners may use reasonable force to “terminate another’s intrusion upon the actor’s land or chattels,” but the privilege is conditioned on “first request[ing] the other to desist.”143 This rule incentivizes an actor pursuing his self-interest (in property possession) to get more context about an ostensibly trespassing other, whose interest (in physical security) might be inhibited by the use of force. When the actor contemplating force makes a request of the other, that request prompts a response, and each party is able to share knowledge and clarify their respective understandings of the situation. When they contextualize, the Restatement intimates, both interests can be mutually served:
The necessity of a request comes from the fact that in many cases the intruder mistakenly believes that he has a right or privilege to intrude or that he has a license from the possessor or from someone whom he mistakenly believes to be the possessor or, although he knows that he has no right, privilege, or license, he believes that the possessor will not object to his intrusion. In these cases a request that he cease his intrusion, or even a mere warning that his intrusion is objectionable, is likely to be sufficient to cause him to desist from his attempt to intrude or to terminate his intrusion, and the use of physical force no matter how slight is not necessary until a request has been made and disregarded.144
The Restatement also foregrounds contextualization in its explanation of the prohibition on spring guns and other mechanical devices to repel property intruders:
Even though the conduct of the intruder is such as would have justified the actor in mistakenly believing the intrusion to be [unconsented], there is the chance that the actor, if present in person, would realize the other’s situation. An intruder whose intrusion is not of this [unconsented] character is entitled to the chance of safety arising from the presence of a human being capable of judgment.145
The doctrinal requirement that property owners exercise “human judgment” toward intruders signals that the provision of care—often thought to be concentrated in the “action” phase of the cognitive sequence—actually originates with effective contextualization.146
3. Acting
Tort doctrine begins by obliging actors to proceed based on “attention, perception of the circumstances, memory, knowledge of other pertinent matters, intelligence, and judgment.”147 But once the actor has assessed risk by contextualizing sensory information, the Restatement frees him to strike whatever equilibrium between self-interest and other-care he thinks appropriate. There is, according to the Restatement (Second), “rarely an absolute duty to secure the other’s protection,” only a suggestion to adjust one’s “own affairs” when the “other’s danger” warrants it.148 The Restatement (Third) supersedes this language, but offers a similarly modest approach to conduct directives, explaining that “[w]hile negligence law is concerned with social interests, courts regularly consider private interests . . . because the general public good is promoted by the protection and advancement of private interests.”149
This light touch makes sense when tort is viewed through the prism of community construction. The Restatement suggests that self-interested actions are acceptable when they advance social value.150 But it does not impose social values on the community as the officials of a political community might. Rather, tort doctrine empowers members of the jury representing the sociological community to determine what behavior is considered locally valuable. This is the only way the resulting jury verdict can signal community-specific justice norms.151 So while actors are required to contextualize competently, they are permitted to test various responsive action plans. Their completed action plans may generate disputes with others, but the community welcomes these occasional clashes as opportunities to update its justice norms.152
In sum, the rules of tort align with the byways of sociological community. Both tort rules and community practice seek to maximize prosocial coordination without direct state intervention. Both foreground person-to-person relationships as the relevant unit of coordination. Within each relational pair, actors are permitted to pursue self-interested goals but are required to use their tacit knowledge to assess how that pursuit may invade the goals of others. And within each pair, actors are expected to make dynamic, real-time decisions that balance self-interest and other-regard. In other words, what tort doctrine demands of juridical actors is no different than what sociological communities demand of their members: relational coordination reflecting shared notions of value. The coordinative unit of community—and of tort—is the bilateral relationship between actor and other.
III. Synthetic Relationships as Tort Wrongs
Justice theorists contend that the goal of tort is prosocial coordination, and that bilateral relationships are the natural unit of coordination and care. This Part suggests that once bilaterality is understood as the precondition for relational care, the stage is set for distinguishing between “ordinary” dangers properly subject to a negligence rule and “extraordinary” dangers properly held to a strict rule. When two actors occupy the same problem space, they bring tacit knowledge and humanistic context that enable them to assess how their pursuit of self-interest may harm the other. This insight typically prompts them to moderate self-interest and avoid other-harm. The actor can adapt the risks he takes in pursuit of his self-interest as new information arises. His goal pursuit may pose a danger, but his ability to make injury-avoiding adjustments renders that danger ordinary and normal. If he squanders opportunities to avoid injury within a bilateral pairing by incompetently sensing, contextualizing, or acting, negligence liability is warranted.
In contrast, when an actor deploys an inanimate, care-insensitive delegate to replicate his side of an interpersonal relationship, that delegate stands between the actor and the other. By delegating his care obligations, the actor has created a trilateral, “synthetic” relationship mediated by a nonhuman actor. Within this relationship, the actor’s interest is being pursued by a delegate unable to put new information into human context or make real-time action modifications in response to that human context. The actor’s choice to remove human cognition from the problem space has replaced authentic care with a counterfeit version. Once the actor cedes control of the problem space to his inanimate delegate, he can do nothing to reverse the delegate’s action process, even when other-harm is imminent. The actor’s inability to adjust the instrumentality’s course to avoid injury makes the danger associated with the delegation extraordinary and abnormal. Creating trilateral relationships that give synthetic care is a genuine relational wrong even when the inanimate delegate is able to accomplish the actor’s ends efficiently and safely in the majority of problem spaces. When the actor surrenders to a nonhuman delegate in a trilateral relationship the caretaking opportunities he would have had in a bilateral relationship, and injury results, strict liability is warranted. In sum, sorting the world into bilateral and trilateral relationships is a principled way of distinguishing between activities that should be governed by a fault rule and those that should be governed by a no-fault rule. It achieves interpersonal justice by measuring fault in relational terms without asking instrumental questions about whether a liability assignment will serve an exuberant economy, automated relationships, or any other policy goal external to tort.153
A. The Bilateral Baseline
Bilaterality has been identified by philosophers as the necessary framework for delivering the “mutual respect” between community members that is the “basic term[] of voluntary human association.”154 All members of a community are assumed to be agents; their actions reflect the natural endowment to take risks in their own interest and to exercise care toward community counterparts.155 Condemning the misuse of that endowment is a way of doing community justice.
Because tort’s purpose is to construct community, its formal structure begins with the same bilaterality upon which informal communities are built.156 Bilaterality is the essential architecture of intentional tort and negligence. Tort assumes that within any relational pair, each actor’s cognitive endowment enables him to devise an action plan that strikes some balance between self-interest and other-regard. Failing the cognitive task at any point is a relational wrong. When an actor’s design is to use perception and context to select action sure to injure, tort categorizes his behavior as “culpable intention” and the community is invited to condemn it as an intentional tort.157 When an actor neglects to sense, contextualize, or act in a sufficiently other-respecting fashion, tort categorizes his behavior as “culpable inadvertence”158 and the community is invited to condemn it as negligence.
B. The Trilateral Counterfeit
Though bilateral relationships have historically been the cornerstone of community building and tort doctrine, they are not the only relational configuration known to the group. Community members may also pursue self-interest by delegating a task to a nonhuman instrumentality. These delegations create trilateral relationships involving the actor, the instrumentality, and the other. Throughout history, actors have typically selected as relational delegates inanimate instrumentalities with scientific properties that achieve the actor’s goal quickly, cheaply, and safely in most cases. So, the actor who delegates is making a rational choice to save time or labor with generally good results. However, these delegations remain problematic because the intermediaries lack the human capacity to perceive the world at large, contextualize emergent variables, or undertake a careful action program in real time. They may not injure in a typical use case, but they cannot avoid injuring in atypical use cases.
Take, for example, explosives, which have long been subject to a strict liability rule. Their physical properties dictate that they will respond to external shocks by rapidly decomposing and producing gas and heat.159 This happens regardless of the context in which the shock is experienced. So, if explosives are activated to demolish a building, the gas they release will destroy the building quickly and with minimal labor. But that same gas lacks the cognitive endowment to realize when it is moving outside the target building and toward vulnerable people nearby. It also lacks the dynamic decision-making ability to change course, reduce velocity, or cool down to avoid injuring those people.
The actor who fully delegates a self-interested task to a nonhuman intermediary is consciously renouncing his opportunity to perceive the other, put him into social context, and use that context to modify his action program in real time. By creating a trilateral relationship—actor, intermediary, other—he has replaced authentic human care with a synthetic version delivered by his nonhuman delegate.160
These trilateral relationships are simply not amenable to tort rules premised on bilaterality. The intentional tort and negligence analyses are asking whether the actor’s cognitive attention to others was malign or deficient. That question has no purchase in a trilateral relationship because the actor who outsources labor to a nonhuman third party is not attempting to give cognitive attention to individual others in specific problem spaces.
To be sure, most actors who delegate relational labor to inanimate instrumentalities have assured themselves that the delegate is likely to achieve the desired goal without causing injury across multiple similar problem spaces. So, the actor has used his cognition to select an instrumentality that is reasonable as a categorical matter. At the same time, he has selected the instrumentality with full awareness that it is not capable of human cognition and that its properties are irreversible once activated. Because the purpose of tort is to facilitate human caregiving as the basis for community construction, delegating care obligations to nonhumans is incompatible with tort notions of justice.
Enter strict liability. Trilateral relationships thwart human caregiving, so their very creation is a relational wrong. The actor has chosen his nonhuman instrumentality for intrinsic properties that make it good at achieving the actor’s goal, but also unable to sense impending injury or respond to it. When risks arise, they cannot be managed. This creates extraordinary danger. Significantly, this extraordinary danger results from the actor’s unilateral decision to replace bilateral care with a trilateral counterfeit. All of the benefits of a bilateral relationship—the ability to intuit information about the other, the ability to negotiate a mutually beneficial exchange of care for risk in real time, and the symbolic treatment of the relational counterpart as a juridical equal entitled to human care—have been taken from the relational counterpart without his consent. The actor has unilaterally prioritized his interests above those of other community members. That is the essence of wrong within tort: a statement by the defendant that “I am here up high and you are there down below.”161
So, justice theories of tort need not expel or apologize for strict liability, because it is a necessary rule for a distinct kind of relational wrong. Outsourcing care to nonhuman intermediaries, even when reasonable, puts asunder the bilaterality necessary to community. When that outsourcing produces an injury, liability must follow.
C. Trilaterality in Case Law
Over the past century, instrumental theorists of tort have justified strict liability as a product of policy preferences and have shrunk the scope of strict liability as policy preferences have evolved. But it is just as plausible to explain strict liability using a justice theory anchored by a trilateral model of wrongdoing. Most activities that have been diverted to the strict liability sector throughout the past century of tort have displayed the trilateral signature that suggests a short-circuit of human care. So, grounding strict liability in justice theory immunizes this sector from policy caprice while leaving accepted case law intact. Moreover, understanding the link between synthetic relationships and wronging makes sense of strict liability’s enduring fear of the “non-natural.”
The First Restatement defined the strict category to include activities “necessarily involv[ing] a risk of serious harm . . . which cannot be eliminated by the exercise of the utmost care.”162 The category was said to include storage and transportation of explosive substances, blasting, and oil drilling.163 The Second Restatement, published in 1964, offered a similar list of activities considered “abnormally” dangerous and ripe for strict treatment: blasting, the use of explosives, water, flammable liquids, poisonous gas or dust, oil wells and refineries, and production of atomic energy.164 The Reporters of the Third Restatement identified essentially the same core group of undertakings as extraordinarily dangerous.165
From 1934 to the present, the Institute has said that these activities are extraordinarily dangerous for instrumental policy reasons. But trilaterality explains these outcomes equally well. In each activity, the defendant deploys an inanimate substance or process with properties that enable it to quickly or cheaply replicate work the defendant would ordinarily do himself. If this labor were carried out naturally by a human actor, it would be taxing and costly but would embed opportunities for individualized caregiving. The properties that make these instrumentalities shortcuts also make them insensitive to risk, resistant to human control, and therefore incapable of individualized caregiving.
For example, water, because of its fluidity, weight, and gravitational properties, may be used as a source of cheap hydropower. At the same time, it is an instrumentality that cannot itself identify or respond to risk and cannot be fully controlled once it escapes containment. This is the underlying justification for assigning a compensatory obligation to the owner of an overflowing reservoir who was not personally careless in having the mechanism constructed, but who nevertheless knowingly harnessed mass quantities of uncontrollable water liable to escape and do harm, as in Rylands v. Fletcher.166 Explosives, too, because of their ability to produce propulsive gas, can cheaply and quickly demolish a building or level natural rock formations. But because of their chemical composition, detonation that results from heat, friction, or impact cannot be reversed, and once a human actor unleashes them, they act in accord with nonhuman imperatives that resist human control.167 For example, a federal court in 1927 held strictly liable a construction company whose careful use of dynamite to “blast[] away solid rock from the side of a mountain” to build a public highway repeatedly launched rock onto the tracks of a nearby railroad.168 The court held that the blasting operation was “intrinsically dangerous” and as a result that the blaster was “liable for the damage done . . . quite irrespective of the question of . . . negligence.”169
A trilateral model of strict liability offers a wrong-based principle for singling these activities out as extraordinarily dangerous. Further, it equips tort to assess the danger posed by emerging technologies without turning to the whims of market or culture.170
D. Trilaterality and the Natural World
Aside from explaining the relational wrong that justifies strict liability, the concept of trilaterality also illuminates the “non-naturalness” that courts often allude to when applying a no-fault rule to a given undertaking. In the bursting-reservoir case Rylands v. Fletcher, Lord Cairns famously introduced the idea that non-natural behaviors require strict treatment.171 But while many courts have subsequently associated strict liability with non-naturalness, they have never reduced the test for “non-naturalness” to determinate form.172 Trilaterality may hold the key. If tort’s goal is community construction, and its unit of construction is bilateral person-to-person interaction, then using nonhuman instrumentalities to replace human care violates the natural framework for giving meaning to tort rights. But using nonhuman instrumentalities to assist human care honors that natural framework. Complete delegations of care to nonhuman intermediaries are, on this theory, non-natural.
This understanding of non-naturalness is especially helpful when courts wish to assess the wrongfulness of unfamiliar technologies instead of approaching technological liability from a policy perspective. The introduction of automobiles to the marketplace in the first decade of the twentieth century illustrates the power of trilaterality to address technological innovations. Cars came on the scene when tort was still considered a law of wrongs rather than an instrument of policy. Courts were asked early on to adjudicate disputes over liability for injuries arising from this alarming new technology. At the time, many observers who did not understand combustion engines condemned them as “devil-wagons,” which seemed to navigate unbidden by the human hand.173
In a series of early cases, plaintiffs argued that cars were so otherworldly that they should be “placed in the category with the locomotive, ferocious animals, dynamite, and other dangerous contrivances and agencies.”174 These arguments were emphatically rejected. In each case, judges employed something very similar to the bilateral-trilateral distinction to conclude that cars were ordinarily dangerous. They repeatedly observed that owners retained complete control over vehicle operation. For example, the Court of Appeals of Georgia explained that “[i]t is not the ferocity of [the] automobile[] that is to be feared, but the ferocity of those who drive them. Until human agency interferes, they are usually harmless.”175 Judges implicitly considered drivers of analog cars to be in natural, bilateral relationships with others on the road, and to have adequate care opportunities. They were therefore able to categorize the automobile as a fault-based instrumentality despite cultural fear of it. This historical example showcases the elegant logic that a bilateral-trilateral distinction brings to emerging technologies.176 Notably, the bilateral-trilateral principle can also guide courts to the conclusion that some new technologies are extraordinarily dangerous. For example, the California Supreme Court in 1948 classified the use of cyanide gas to exterminate vermin as extraordinarily dangerous and subject to a no-fault rule, after observing that it was “lethal to human beings” in small quantities, “lighter-than-air,” and so “very penetrative” that it would virtually always leak through barriers, no matter how scrupulously the exterminator tried to confine it.177 The court did not explicitly label the exterminator’s use of the gas “trilateral,” but it alluded to bilateral-trilateral principles when it suggested the gas posed a qualitatively different danger from cars, which can be “careful[ly] operat[ed].”178
IV. Synthetic Relationships and Artificial Intelligence
Exposing trilaterality as the wrong that justifies strict liability can jolt the no-fault sector to life just in time to address the emerging problem of injury-by-algorithm. Many tort theorists contemplating how liability for AI-inflicted injuries should be assessed have taken instrumentalism as their starting point. That is, they assume that some degree of AI adoption is the ultimate good tort should facilitate, and that relational expectations must be adjusted to produce that good. Depending on their policy priors, these theorists have proposed a variety of categorical AI regimes, ranging from fault-based liability to products liability to a new kind of liability tailored specifically to AI.179 This Part advocates the opposite approach. Relational coordination is the ultimate good tort should facilitate, and AI liability should be assigned by asking how particular uses drive or impede that good. On a justice theory of tort, the bilateral-trilateral principle of relational coordination sorts particular deployments of artificially intelligent delegates into those where authentic human care is possible and those where it is impossible. It determines the appropriate liability rule accordingly, with the former governed by negligence and the latter governed by strict liability. Unlike instrumental, AI-subsidizing regimes, a justice approach to AI is a more objective and community-empowering response to the automated relationships that are coming to dominate modern life. This Part describes how AI attempts to reproduce human cognition and human care and demonstrates why those attempts are bound to fail in a number of cases. It then turns to three commercial and institutional uses of AI—autonomous vehicles, robot journalism, and facial recognition technology—sketching the cognitive architecture and injury records of each. Using this background, it shows how a justice theory of strict liability would operate to “preserve . . . human values” in these and other AI undertakings.180 Courts would first isolate the relational configuration of a given AI application as bilateral or trilateral. They would then toggle between a negligence rule for defendants who retain control of the technology and a strict rule for defendants who renounce personal control, and they would assign liability accordingly.
A. Artificial Intelligence: Its Potential and Its Limits
AI is “the field of computer science dedicated to developing machines that will be able to mimic and perform the same tasks just as a human would.”181 Scholars who describe human thinking as “nothing more than a mechanical manipulation of symbols” realized some time ago that computers were capable of this manipulation, and they concluded that those computers could be programmed to acquire and apply knowledge “in order to change [the] environment” as humans would.182 Today, AI is defined as a system that can “correctly interpret external data, . . . learn from such data, and . . . use those [learnings] to achieve specific goals and tasks through flexible adaptation.”183 The goal of AI is to produce a “mechanized, simplified version” of human cognition.184 As a result, some have observed a “strong connection between the human psychological functioning and . . . AI.”185
Given the “strong connection” between human psychological operation and AI, it is unsurprising that AI processes try to map on to human processes.186 Today’s sophisticated AI integrates machine learning, in which “repeated exposures . . . to an information-rich environment gradually produce, expand, enhance, or reinforce that system’s behavioral and cognitive competence in that environment or relevantly similar ones.”187 Each system uses neural networks involving three sets of “nodes.” A set of “input nodes” acquires and translates source data into representational form.188 This node layer uptakes information about the exterior world and is designed to replicate human “sensing.” A set of “output nodes” is programmed to take “control actions” consistent with a menu of desired results.189 The output nodes are designed to replicate human “action.” Finally, the input and output nodes are mediated by interim “layers” of nodes. These mediating layers extract the sensory information inputs that correlate with the preferred action outputs.190 The links between input and output nodes are assigned numerical weights, and learning algorithms can adapt these connective links over time. Learning algorithms “train” the network to adjust to new information “in such a way that the relationship between input and output layers is optimized.”191 The intermediate layers of nodes that identify the external information most strongly linked with good outcomes are designed to replicate human “contextualization.”
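To make the node-and-weight architecture described above concrete, the following minimal sketch (written in Python, with invented dimensions, weights, and training examples, and not drawn from any system cited in this Article) shows how a learning algorithm nudges the numerical links between input, mediating, and output layers so that the relationship between inputs and outputs improves with repeated exposure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 4 "input nodes" (sensed features), 8 mediating
# nodes, and 2 "output nodes" (candidate control actions).
W1 = rng.normal(scale=0.5, size=(4, 8))   # links from input layer to mediating layer
W2 = rng.normal(scale=0.5, size=(8, 2))   # links from mediating layer to output layer

def forward(x):
    """Propagate sensed data through the weighted links between node layers."""
    hidden = np.tanh(x @ W1)                       # mediating layer extracts correlations
    output = 1.0 / (1.0 + np.exp(-(hidden @ W2)))  # output node activations
    return hidden, output

def train_step(x, target, lr=0.1):
    """Adjust the numerical weights so the input-output relationship improves."""
    global W1, W2
    hidden, output = forward(x)
    delta_out = (output - target) * output * (1.0 - output)   # error signal at the outputs
    grad_W2 = np.outer(hidden, delta_out)
    delta_hidden = (delta_out @ W2.T) * (1.0 - hidden ** 2)
    grad_W1 = np.outer(x, delta_hidden)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

# "Training" repeatedly exposes the network to examples pairing sensed inputs
# with preferred outputs, nudging the connective weights after each exposure.
for _ in range(1000):
    x = rng.normal(size=4)
    target = np.array([1.0, 0.0]) if x[0] > 0 else np.array([0.0, 1.0])
    train_step(x, target)
```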
The ability of AI to approximate the human cognitive sequence is remarkable. But approximation is not replication, and AI contextualization is far inferior to human contextualization. Humans contextualize using tacit knowledge derived from daily immersion in community norms and the capacity to imagine the lifeworld of an other,192 so AI will always struggle to contextualize as a human would.193 Of course, many AI theorists hope that by feeding enormous and wide‑ranging datasets into algorithms, neural networks will eventually be able to produce a synthetic version of tacit knowledge. But even these efforts to program a more nuanced model of the “problem space” in which human beings relate194 would still remain inferior because algorithms produce decision outputs through correlation.195 Correlation can “duplicate a human activity,” but “it often turns out that . . . [the activity being duplicated has been] seriously simplified and distorted.”196 “[H]uman thinking is basically not algorithmic,” so there are inherent limits to algorithmic reproduction of that thinking.197
The limits of AI emerge most sharply in the arena of social interaction. As one scholar has observed, “[W]e are bodily and social beings, living in a material and social world,” and “we can understand [how other people interact with material and society] because we are also in the world.”198 Computers, in contrast, “are not in our world.”199 For that reason, even programs that have used neural networks to develop synthetic tacit knowledge using enormous datasets can only ascribe meaning to that data using limited, programmed models.200 The models “lack[] flexibility, and [are] not able to adapt to changes in the environment” the way that a human being would in real time.201 The “problem space” that computers are programmed to navigate is a limited and streamlined construct, whereas the problem spaces that human beings have learned to navigate over a lifetime are natural and idiosyncratic. Consequently, programmed tacit knowledge is simply not the same as human tacit knowledge.
B. Artificial Intelligence: Notable Applications
Corporations and governments have begun to develop, use, and sell technologies that replace human labor with artificial intelligence; autonomous vehicles (AVs), robot journalism, and facial recognition technologies are leading the way. Each of these innovations is designed to mimic the three‑step “sense, contextualize, act” cognitive sequence that produces human relational behavior. But recent experience with each confirms that AI contextualization does not replicate human contextualization. If the AI cognitive phase that identifies relational risks in real time is synthetic and inflexible, AI outputs are bound to fail when unexpected conditions arise. Sure enough, all three technologies have produced profound relational injuries, including death, defamation, and racially biased false arrest, and all of these injuries can be traced to deficiencies in AI contextualization.
1. Autonomous Vehicles
As early as 1980, researchers were designing cars that could be driven with minimal human direction.202 Today, engineers have devised a taxonomy to describe the various levels of AV sophistication, ranging from Level 0 cars, which employ no automation and are fully controlled by the driver, up to Level 5 cars, which are fully automated and perform all driver functions.203 Although manufacturers originally expected Level 5 cars to be market-ready by 2020, they have revised that goal toward production of Level 4 cars in the short term.204 One reason for the adjusted timeline is the realization that AVs operate in a profoundly unpredictable environment, making it difficult to write automated driving programs that respond to all environmental variables. “While the technology may be able to handle (hypothetically) 90% of all use cases (examples being kids running after balls in the street, hail storms, and construction zones),” one expert explained, “there are always that many more exceptions.”205
To fully understand the technological challenges involved in designing highly automated Level 3, 4, and 5 cars, it is helpful to describe the systems involved in vehicle automation. AVs essentially require three components, each of which mimics functions now performed by human drivers. First, AVs need a system that perceives the environment through which the car is moving. Today, this function is typically performed by a Light Detection and Ranging system (LIDAR).206 LIDAR transmits light pulses that travel to nearby objects and “backscatter” to a receiver that creates a model of the surrounding environment.207 LIDAR mimics the human cognitive phase of sensing.
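As a rough illustration of the sensing step only, the short Python sketch below (with invented pulse timings; it does not reproduce any vendor’s LIDAR software) converts round-trip pulse times and bearings into a crude point-cloud model of the surrounding environment:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_echo(round_trip_seconds):
    """The pulse travels out and back, so the one-way range is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def point_cloud(returns):
    """Turn (bearing in degrees, round-trip time) pairs into x, y coordinates:
    a crude stand-in for the environmental model a LIDAR receiver builds."""
    cloud = []
    for bearing_deg, round_trip in returns:
        distance = range_from_echo(round_trip)
        theta = math.radians(bearing_deg)
        cloud.append((distance * math.cos(theta), distance * math.sin(theta)))
    return cloud

# A pulse that backscatters after roughly 200 nanoseconds came from an object
# about 30 meters away.
print(round(range_from_echo(200e-9), 1))        # 30.0
print(point_cloud([(0.0, 200e-9), (90.0, 100e-9)]))
```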
AVs then assign social meaning to the LIDAR sensory models. Here, engineers use deep learning.208 Just as human biological neurons send synaptic messages to other neurons to establish neural networks that connect informational inputs relevant to good action outcomes, deep-learning engineers create artificial logic gates that receive and map data to produce a programmed output value.209 Engineers can layer multiple artificial neural networks, each capable of generalizing and sharing generalizations about LIDAR data, to make predictions about the environment.210 For example, neural networks allow the car to determine “whether an object [it encounters] is a tumbleweed or a human.”211 These neural networks mimic the human cognitive phase of contextualizing.
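The classification step can likewise be sketched in simplified form. In the hypothetical Python fragment below, stacked layers of weights (randomly generated purely for illustration, in place of weights a real system would learn from labeled driving data) convert a LIDAR-derived feature vector into a probability over object labels such as “pedestrian” or “tumbleweed”:

```python
import numpy as np

LABELS = ["vehicle", "bicycle", "pedestrian", "tumbleweed", "other"]

def classify(feature_vector, layers):
    """Pass a LIDAR-derived feature vector through stacked layers of weights
    (each layer generalizing the previous layer's output) and read off the
    most probable object label."""
    activation = feature_vector
    for weights in layers[:-1]:
        activation = np.maximum(activation @ weights, 0.0)  # hidden layers generalize
    scores = activation @ layers[-1]                        # one raw score per label
    probabilities = np.exp(scores - scores.max())
    probabilities /= probabilities.sum()
    return LABELS[int(np.argmax(probabilities))], probabilities

# Random weights are placeholders for a trained model, not a trained model itself.
rng = np.random.default_rng(1)
layers = [
    rng.normal(size=(6, 16)),
    rng.normal(size=(16, 16)),
    rng.normal(size=(16, len(LABELS))),
]
label, probabilities = classify(rng.normal(size=6), layers)
print(label, probabilities.round(2))
```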
Finally, a third AV system mimics human decision-making in response to environmental cues. Here, engineers integrate reinforcement learning. Reinforcement learning tries to capture how a human being who encounters an opportunity or impediment in the relevant problem space would act in response.212 The goal of reinforcement learning is to develop an optimal “policy” that allows a technology mimicking human behavior to automatically choose which of several responsive actions will maximize some chosen value, including but not limited to “collision avoidance, driver safety, [and] efficiency.”213 Desirable actions are identified by measuring the values of given pairs of environmental states and possible actions over time.214 Researchers have acknowledged that computing the exact value of every possible state‑action pair may be “computationally infeasible.”215 Consequently, they often use approximation models that combine deep neural networks with learning algorithms to “generalize from collected data of past experiences.”216 These systems “create[] corrective mechanisms to improve learned autonomous behavior.”217 Level 4 and 5 autonomous cars are programmed to execute the driving task without human participation or oversight.218 Level 2 and 3 cars use AI to assist the driver but remain subject to human control and override at all times.219
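A toy version of the reinforcement-learning step, written in Python with invented states, actions, and rewards rather than anything drawn from a production driving system, shows how repeated measurement of state-action values yields a “policy” that trades efficiency against collision avoidance:

```python
import random

# Invented states, actions, and rewards standing in for the driving "policy" problem.
STATES = ["clear_road", "object_ahead"]
ACTIONS = ["maintain_speed", "brake"]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # value of each state-action pair

def reward(state, action):
    """Hypothetical reward balancing efficiency against collision avoidance."""
    if state == "object_ahead" and action == "maintain_speed":
        return -10.0   # collision risk
    if state == "clear_road" and action == "brake":
        return -1.0    # needless delay
    return 1.0

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, epsilon=0.1):
    for _ in range(episodes):
        state = random.choice(STATES)
        # Epsilon-greedy: mostly exploit the best-valued action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = random.choice(STATES)  # toy transition model
        target = reward(state, action) + gamma * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])

q_learning()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)  # expected: brake when an object is ahead, maintain speed otherwise
```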
AV advocates have predicted that these vehicles will be safer in the aggregate than their human‑driven counterparts.220 Indeed, confidence in the overall safety gains that AVs promise is part of what has driven some tort instrumentalists to seek a rule that will subsidize their adoption.221 But increased safety is not the same as complete safety; even in AVs’ limited rollout on public roads, injuries have already arisen.222 The available evidence suggests that contextualization deficiencies in AV systems play an outsized role in these failures.223 The best available data on fully automated AVs comes from Uber’s Advanced Technologies Group, which put a fleet of test AVs on the ground several years ago.224 As of 2018, these ride‑share AVs “performed all driving tasks, including changing lanes, overtaking slow‑moving or stopped vehicles, turning, and stopping at traffic lights and stop signs.”225 Although the fleet incorporated technology to fully automate the driving task, Uber also employed human operators to sit in the cars, oversee their operation, and take personal control if needed in emergencies.226 According to the company’s records, from September 2016 to March 2018, Uber AVs were involved in thirty-eight crashes and other incidents.227 In two of these incidents, the Uber AV was the “striking vehicle.”228
The most serious of these strikes involved an Uber AV in Arizona that killed a woman jaywalking across a four‑lane roadway while pushing a bicycle.229 Unsurprisingly, when the National Transportation Safety Board (NTSB) investigated, it traced the crash to deficiencies in the car’s automated contextualization program. The car’s LIDAR accurately sensed an impediment in the roadway. It was supposed to use that sensory information to classify the impediment, determine the “typical goal” of such an impediment, and predict whether pursuing that goal would bring that impediment into the car’s path.230 However, the system was unable to classify the impediment. It “did not have the functionality to anticipate pedestrians crossing midblock outside a marked crosswalk,”231 so it did not classify the object as a person who could be injured. In the 5.6 seconds leading up to the crash, the program classified the woman as a vehicle, an unidentified “other,” a vehicle, a different vehicle, an “other,” a bicycle, a different bicycle, an “other,” a bicycle, and a different bicycle.232 The car did not predict that the impediment would be in its path until 1.5 seconds before the collision.233
Once the car predicted the impediment would cross its path, it shifted to step three of the cognitive process, selecting a “motion plan”—the automated equivalent of the human action phase; this cognitive phase was also problematic. Uber had programmed the car to respond to perceived roadway impediments by suppressing automated steering and braking for one second to avoid overreactions to false alarms.234 When the one-second suppression ended, the car was programmed to execute an extreme brake if braking could avoid a collision and to slow gradually if a collision was inevitable.235 Once the car identified a crash as imminent, it sounded an alarm for the human driver and automatically decelerated. At the same time, the driver manually applied the brakes.236 However, neither action was able to prevent the collision, and the woman died.
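The sequence the NTSB account describes can be restated as a simple decision procedure. The Python sketch below is a loose reconstruction of that logic for illustration only; it is not Uber’s code and omits the actual system’s complexity:

```python
def motion_plan(seconds_since_detection, collision_imminent, braking_would_avoid):
    """Loose reconstruction of the suppression-then-brake sequence described in
    the text; not the actual system, and far simpler than it."""
    if seconds_since_detection < 1.0:
        return "suppress automated braking and steering (filter false alarms)"
    if not collision_imminent:
        return "continue on planned path"
    if braking_would_avoid:
        return "execute maximum braking"
    return "decelerate gradually and alert the human operator"

print(motion_plan(0.5, collision_imminent=True, braking_would_avoid=True))
print(motion_plan(1.2, collision_imminent=True, braking_would_avoid=False))
```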
2. Robot Journalism
Artificial intelligence has also begun to replicate human labor in the media sector. Many news organizations equip human journalists with automation that helps them evaluate large datasets and identify trends that merit further coverage.237 But beyond this assistive technology, leading news organizations have started turning over the entire reporting enterprise for some kinds of coverage to machine-learning mechanisms.238 The New York Times, The Washington Post, Los Angeles Times, The Associated Press, and Forbes have all used so‑called bot-journalist stories as part of their content mix.239 For example, in 2014, The Associated Press began using an algorithm created by the company Automated Insights to turn corporate earnings data into narrative reports on companies’ quarterly performance.240 The algorithms have also been used to report on sports events, earthquakes, and election results.241
Like other algorithms replicating human cognition, news algorithms mimic the human sequence of sensing, contextualizing, and acting. Bot journalists uptake information about the external world using sets of “clean, structured, and reliable data.”242 These datasets are fed into algorithms programmed to recognize and extract information that correlates with predetermined categories deemed newsworthy, approximating the human contextualization task. Once the algorithms have pulled seemingly relevant information and sorted it into categories, they link that information with corresponding narrative phrases from a predetermined menu.243 So, for example, when The Associated Press contracted with an outside company to create an algorithm to produce news stories using corporate earnings reports, it asked editors to generate dummy phrases that would describe common occurrences mentioned in these filings. The algorithm then acted a second time by sending the resulting language chains “directly out on the AP wires without human intervention.”244
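The pipeline just described, structured data in and canned narrative out, can be illustrated with a deliberately simplified Python sketch; the earnings figures and template phrases below are invented and are not drawn from The Associated Press or Automated Insights:

```python
# Invented earnings figures and template phrases, meant only to illustrate the
# "categorize, then attach canned narrative" pipeline described above.
TEMPLATES = {
    "beat": "{company} beat Wall Street expectations, reporting earnings of ${eps:.2f} per share.",
    "miss": "{company} fell short of Wall Street expectations, reporting earnings of ${eps:.2f} per share.",
}

def categorize(report):
    """Contextualization step: sort the structured data into a newsworthy category."""
    return "beat" if report["eps"] >= report["eps_forecast"] else "miss"

def write_story(report):
    """Action step: bolt the pre-written phrase onto the data, producing copy that
    a fully automated pipeline would send straight out on the wire."""
    return TEMPLATES[categorize(report)].format(**report)

report = {"company": "Acme Corp.", "eps": 1.12, "eps_forecast": 1.05}
print(write_story(report))
```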
Advocates of algorithmic news coverage say that automated stories are more accurate than their naturally produced counterparts.245 This claim echoes the promise of AV entrepreneurs that automated cars are safer overall than human-driven cars. But as in the AV sector, robot journalists are not infallible. According to one study, thousands of bot-journalist stories have been corrected after publication.246 One such notable mistake involved a July 2015 Associated Press report about Netflix’s quarterly earnings. The story reported that the company had missed earnings projections and that “the share price had fallen by seventy-one percent since the beginning of the year.”247 In fact, Netflix stock had doubled in value during the relevant period but had undergone a seven-for-one split.248 The algorithm had accurately uptaken information about the fluctuating price of the stock, but its neural network was not sophisticated enough to accurately contextualize the meaning of those fluctuating prices. Its deficient contextualization led it to assign an incorrect meaning to the data and to select narrative language matching that misinterpretation. And because the AI action program called for the machine-selected narrative to be posted directly to The Associated Press wire with no human oversight, the algorithm ultimately circulated false and damaging information about the company.249
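A short worked example clarifies how a correlation-driven program can misread a split. The figures below are hypothetical, chosen only to illustrate the failure mode rather than to reproduce the actual Netflix data:

```python
# Hypothetical prices chosen only to illustrate the failure mode; they are not
# the actual Netflix figures.
price_at_new_year = 490.00   # pre-split share price at the start of the year
price_in_july = 140.00       # post-split share price
split_ratio = 7              # a seven-for-one split multiplies the share count

naive_change = (price_in_july - price_at_new_year) / price_at_new_year
adjusted_change = (price_in_july * split_ratio - price_at_new_year) / price_at_new_year

print(f"raw price comparison: {naive_change:+.0%}")     # about -71%: a spurious collapse
print(f"split-adjusted:       {adjusted_change:+.0%}")  # about +100%: an actual doubling
```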
3. AI Facial Recognition
Market actors are not alone in leveraging AI to cheaply replicate human labor. Public entities increasingly rely on AI facial recognition programs to carry out their various missions. The most controversial use of facial recognition in recent years has been by police departments and other law enforcement agencies.
Facial recognition technology (FRT) follows roughly the same three-step sequence identified in the autonomous vehicle and robot journalism contexts: sensory perception, contextualization, and action execution. Like those who design autonomous vehicles and robot journalists, facial recognition programmers begin by obtaining sensory data about the external world. FRT “enrolls” into a “gallery” photographs of identified faces obtained from government databases, social media, and other sources.250 Each photograph is then reduced to a digital record known as a faceprint, which summarizes unique features like eye distance and mouth shape and labels them with a corresponding name.251
Once a gallery of faceprints has been constructed, FRT users can compare anonymous facial images captured at mission-relevant locations with the identified faceprints in the gallery.252 This comparison mimics human contextualization. A human actor would ordinarily see an other in real time and search his memory to determine whether and how he knows the other’s identity. FRT assigns significance to an anonymous face observed in the real world by comparing it with a set of pre-identified faces and designating them a match when the correlation between unique facial features meets a statistical threshold.253
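The matching step can be sketched in a few lines of Python. The faceprints, names, and threshold below are invented; the point is only to show how a probe image is scored against an enrolled gallery and declared a “match” when a similarity statistic clears a preset cutoff:

```python
import numpy as np

# Toy "gallery" of faceprints: each identified face reduced to a numeric feature
# vector. The names, vectors, and threshold are invented for illustration.
GALLERY = {
    "person_a": np.array([0.61, 0.22, 0.35, 0.80]),
    "person_b": np.array([0.10, 0.94, 0.42, 0.17]),
}
MATCH_THRESHOLD = 0.95  # minimum similarity the system will call a "match"

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def identify(probe_faceprint):
    """Score an anonymous probe against every enrolled faceprint and return the
    best-scoring identity only if it clears the statistical threshold."""
    scores = {name: cosine_similarity(probe_faceprint, fp) for name, fp in GALLERY.items()}
    best_name, best_score = max(scores.items(), key=lambda item: item[1])
    return (best_name if best_score >= MATCH_THRESHOLD else None), best_score

probe = np.array([0.60, 0.25, 0.33, 0.78])  # unidentified face captured in the field
print(identify(probe))
```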
Like autonomous vehicles and news algorithms, FRT can be used both with and without human oversight. For example, some public housing authorities have begun to use FRT instead of keys to control access to physical buildings.254 The technology automatically denies access to unfamiliar faces, with no real-time opportunity for human intervention. Other FRT users deploy the technology to produce facial matches without programming any automated response to the match. Police departments are increasingly integrating FRT into their investigatory toolkit, but they claim that FRT merely supplements human investigation, with officers retaining control over questioning, arresting, and charging suspects identified through recognition algorithms.255
Like autonomous vehicles and robot journalists, facial recognition algorithms have been known to err. Specifically, “the accuracies of face recognition systems used by US-based law enforcement are systematically lower for people labeled female, Black, or between the ages of 18–30 than for other demographic cohorts.”256 Consequently, some experts have predicted that facial recognition technologies that are used to carry out tasks absent human oversight will routinely injure minorities and women. When FRT is used to control access to physical spaces, these technologies may wrongfully deny minorities and women access when a program erroneously fails to recognize their faces. For example, as an increasing number of U.S. airports use FRT as a boarding control measure, its documented “4% false negative rate. . . . means one in 25 people will be told by the machine, ‘sorry, you’re not who you claim to be.’”257 These technologies may also wrongfully stigmatize individuals when a deficient program erroneously recognizes them. For example, in a widely covered incident during the summer of 2020, an FRT misfire led Detroit police to mistakenly arrest a Black man for shoplifting.258
In that case, after Detroit police were alerted to a 2018 shoplifting incident at a local Shinola watch store, a Michigan State Police worker uploaded surveillance video of the event to the agency’s facial recognition database.259 The anonymous “probe” photo from the store generated a set of possible match photos, along with scores indicating how confident the algorithm was in the accuracy of the match.260 One of the match photos belonged to Robert Julian-Borchak Williams, a Detroit area automotive supply worker. The photo was sent to Detroit police, marked “Investigative Lead Report.”261 The police then included Williams’s photo in a “6-pack photo lineup” that was shown to the Shinola loss-prevention contractor who had provided the surveillance video.262 The contractor identified Williams as the perpetrator.263 Police apprehended Williams at his home, arrested him, took him to a detention center, and held him overnight until they interviewed him the next day.264 When police showed him a still photo of the shoplifter, he held it up next to his face to demonstrate the difference between the two.265 They eventually acknowledged their mistake.266 Williams was released on bond later that day, and when he was arraigned two weeks later, prosecutors offered to dismiss the charges without prejudice.267 After the incident was the subject of a New York Times news report, prosecutors agreed to expunge the arrest and the fingerprint data from their records.268
C. AI Relationships and AI Wrongs
There is little doubt that artificially intelligent instrumentalities can replicate human action quickly, cheaply, and somewhat reliably over multiple presentations of the same limited problem space. This remarkable technological development has created a culture of deference to AI and a rational corporate interest in deploying it to achieve large-scale efficiencies. As long as algorithms achieve broad cost savings with only occasional harms, the choice to outsource relationships to AI is likely to be considered reasonable for purposes of negligence liability. Similarly, when an actor’s algorithm has been programmed to fit a predictable paradigm case and produces the desired outcome in the majority of those cases,269 it is likely to be considered nondefective for purposes of products liability.270
But AI’s relational promise should not obscure its relational limits. Human beings moving through the world use idiosyncratic, experiential knowledge and imaginative powers to put what they see and hear about others into action-relevant context and to exercise other-regarding care. Idiosyncratic knowledge and imagination are beyond the capacity of correlative algorithmic processes. AI cannot assign social context to things and people in the world. And early experience with automated vehicles, bot journalism, and facial recognition technologies shows that despite their overall safety, misfires are not just possible, but inevitable when a problem space in the real world differs from the problem space engineered by the AI programmer. An instrumental theory of tort concerned with market efficiency will ask whether the expected harm of a particular AI use (its likelihood multiplied by its gravity) outweighs the cost savings achieved by automating a category of consumer relationships. And unless harms are widespread or unacceptably grave, few uses of AI will be condemned under this test, and the rare injury that results will be framed as the necessary cost of technological progress. But a justice theory of tort concerned with relational obligations is asking a different question: Has the user of AI retained or renounced the ability to give genuine human care to the person on the other end of the technology? Retaining that ability and carrying it out incompetently is a negligence wrong; renouncing that ability altogether is a strict liability wrong.
Once strict liability is theorized as an unapologetically just sector of tort, holding actors liable for delegating care, it is easy to see the thematic similarities between AI instrumentalities that execute relational behavior and dynamite that demolishes buildings. Both reflect the user’s conclusion that he is “up here” and the other is “down there.”271 Complete delegations to AI produce relationships that are care-resistant, trilateral, extraordinarily dangerous, and properly subject to a strict liability rule. They may produce few injuries when used within similar problem spaces on multiple occasions over time, but the injuries they do produce are compensable in tort because the defendant knowingly renounced the chance to care for a fellow community member. On the other hand, partial deployments of AI that remain subject to real-time human control represent a conclusion that the user and the other are coequal members of the community entitled to mutual respect. These AI deployments create relationships that are care-sensitive, bilateral, ordinarily dangerous, and subject to a negligence rule that asks whether the defendant used his cognitive control competently.
Turning to pragmatics, when injuries arise in the AV, bot journalism, or FRT sectors, judges need only identify the underlying relational configuration to select between a negligence and a strict liability test to assess the wrongfulness of the defendant’s conduct.
For example, in the automotive market, Level 4 and 5 automated vehicles appear to involve complete delegations of human oversight.272 The deadly Arizona Uber collision discussed above provides a helpful use case. Uber classified the vehicle as a Level 4 car, designed to operate without human participation. Uber delegated all relational behavior to the car, thereby creating a trilateral relationship—Uber, AV, pedestrian—that foreclosed authentic caregiving. Uber replaced human contextualization with the kind of algorithmic, correlative cognition that will almost inevitably fall short from time to time. The car accurately sensed an impediment in the roadway but failed to classify it as a person because the algorithm was developed for an artificial problem space where people did not walk into the road. Because the car could not contextualize the person, its program did not trigger real-time injury-preventing actions like honking, steering, or stopping. Uber created a trilateral relationship unable to produce human care in real time, so a strict standard would apply to the injury it caused.273 Individual justice would result from compensation for the injured person, and community justice would result from the signal to future AI users that their programs should account for jaywalkers. In contrast, Level 0, 1, and 2 vehicles that signal when cars or people are nearby merely assist the driver, who remains fully in control of his response to the automated signal.274 Should he injure, a negligence standard would apply, and the jury would evaluate his contemporaneous care. If it found him unreasonable, individual justice would result from compensation, and community justice would be done by announcing driving norms to the rest of the group.275
Similarly, some robot journalism appears to involve complete delegation of human oversight. In the Netflix example discussed above,276 The Associated Press deployed an algorithm to replicate human news reporting without replicating human caregiving in the publication process. The algorithm accurately uptook stock data about Netflix, but failed to accurately contextualize the data as representing a stock split. Instead, it classified the data as representing a drop in stock price. This contextualization deficiency translated into action deficiencies. The algorithm did not flag the story for human scrutiny; rather, it automatically pulled narrative chunks that created an inaccurate story, and it automatically posted that story directly to the internet for public consumption without human editorial oversight. By setting the algorithm in motion and completely forfeiting the opportunity for human control, The Associated Press created a trilateral relationship—The Associated Press, algorithm, Netflix—that foreclosed bilateral relational caregiving. The Associated Press’s undertaking was extraordinarily dangerous, and it would appropriately be held to a strict standard. But a reporter might also use artificial cognition in so-called computer-assisted reporting.277 For example, she might feed datasets into an algorithm to determine the results of local housing court cases over a long-term period,278 and then use those statistics as a basis for personally conducting interviews, drafting a story, and letting editors review and post the story to the public. The relationship would remain bilateral—reporter to news subject—and any resulting injury would be subject to a negligence rule.279
Finally, some uses of facial recognition technology appear to involve complete forfeitures of human oversight. Knickerbocker Village, a New York City affordable housing complex, recently installed facial recognition technology that controls resident entry.280 Because the complex is home to a number of nonwhite residents, and FRT performs less well on nonwhite faces, “some residents have had difficulty gaining access to [the] building. It’s also unlocked the door to faces of non-residents.”281 Knickerbocker Village has deployed AI to replicate the human labor that might be performed in other buildings by doormen or by residents empowered to personally control entry and exit by using keys. By installing FRT access points, the complex forfeited the opportunity for human control of residential entry. It created a trilateral relationship—Knickerbocker to FRT to resident—that foreclosed bilateral relational caregiving. Knickerbocker’s undertaking was extraordinarily dangerous and would appropriately be held to a strict standard if injuries arose from inadvertent resident lockout or intruder entry.
On the other hand, FRT can be deployed subject to human control. For example, the Detroit police who arrested Robert Julian-Borchak Williams used FRT to identify him as a possible suspect in the Shinola shoplifting. However, the FRT completed just the sensory uptake and contextualization phases of the relational sequence. Once the technology determined that Williams’s face was a match with the face of the Shinola perpetrator, officers assumed control of the action to be taken in response to that correlation. Based on their assumption that the computer-generated identification was accurate, they immediately arrested Williams and took him into custody. Here, although the AI was supplementing the law enforcement officers’ sensory uptake and contextualization tasks by quickly going through millions of faces in the gallery to generate a match with the surveillance footage, the AI did not take any autonomous action. Rather, the investigative process preserved the opportunity for human control by forwarding that contextualization to the officers and leaving them to respond. The resulting relationship remained bilateral—Detroit police officers to Williams—and a negligence standard would apply.282 Jurors would be asked to do individual justice by identifying as unreasonable the officers’ failure to compare Williams’s face with the “probe” photo before taking him into custody, and by awarding him compensation. And they would do community justice by signaling that overreliance on matches from a technology known to perform poorly for Black men will not be tolerated.
Although the use of AI to supplement personal cognition does not create a trilateral relationship, and is best categorized as ordinarily dangerous, it is worth noting how the human beings charged with executing bilateral-care relationships tend to use AI assistance. As one NTSB board member evaluating a Tesla AV crash observed in 2016, many owners “may conclude [from the “Autopilot” label Tesla assigned its suite of driver-assisting AI mechanisms] that they need not pay any attention to the driving task because the autopilot is doing everything,” and they may suspend their personal caregiving accordingly.283 The same phenomenon may have contributed to the wrongful arrest of Robert Williams. When, during a post-arrest interview at the police station, Williams held up the Shinola surveillance footage next to his face and asked the officers to compare the two, they finally acknowledged the match was off and marveled that “the computer got it wrong.”284 “[O]verestimation of the power of AI” does not create trilaterality or justify strict liability.285 But as AI is integrated into daily life and users grow more habituated to its use, one can imagine that insufficient vigilance in overseeing its use may come to be understood as a species of fault on par with carelessness, laziness, and selfishness.286
In sum, reviving strict liability as a just and robust counterpart to negligence liability—understanding both kinds of liability as valid responses to different kinds of genuine relational wrongs—positions tort to evaluate AI technologies from the standpoint of the community as a whole and not from the standpoint of institutional users. It does not begin with the instrumental assumption that socially beneficial innovations must be subsidized by flexible liability rules, or that socially undesirable innovations must be deterred by strict rules. Rather, it foregrounds the question of how a particular undertaking aligns with the community expectation that all members are coequal agents who must act with, and be treated with, care. Once relational equality is placed at the center of tort, some liability findings will wind up facilitating AI innovation and some will wind up burdening it. Crucially, however, both outcomes will proceed from an interest in community justice rather than institutional favoritism toward preferred market actors.
D. The Limiting Principle of Community Acceptance
If creating trilateral relationships is a genuine wrong subject to a strict liability rule, the no-fault category will reach more broadly than previously recognized. That said, a justice theory of strict liability is still subject to limiting principles. The prosocial orientation of tort law means that if a community of relational participants embraces AI or other trilateral technologies as desirable, the relationships that result cannot be considered genuine community wrongs. Unsurprisingly, existing tort doctrine captures this insight. The Restatement provides that even when an undertaking is extraordinarily dangerous, it is not subject to a strict rule unless it is also uncommon.287 Accordingly, the use of nonhuman instrumentalities may be held to a negligence rule if community members’ commonplace engagement with them indicates an appetite for their benefits and a tolerance of their risks.
It is beyond the scope of this Article to propose a device for assessing the “commonness” of a given trilateral undertaking. Still, a word is in order. Unlike political community, which could vote on which technologies are “common,” sociological community has no comparable formality. Instead, as in other areas of community norm creation, tort often empowers judges to treat seismic shifts in behavior as diffuse evidence of community sentiment.288 But in the AI arena, this method of assessing commonness should be handled with care.
When powerful market actors unilaterally decide to conduct consumer relationships via AI, consumers may lack the knowledge, power, or financial resources to object. Pedestrians cannot limit themselves to crosswalks free from autonomous vehicles. Criminal suspects cannot opt out of police coercion premised on the results of faulty facial recognition technology.289 News subjects cannot insulate themselves from bot coverage in the press. So, behavioral shifts may be a poor indicator of community acceptance in this realm. Indeed, the very fact that those exposed to AI injuries lack the social power to opt out of trilateral AI relationships reinforces the argument that a relational rule of strict liability has a role to play as the market continues to automate care.
Conclusion
Tort is a body of law that does justice between people. This simple fact explains strict liability. Community members who delegate their labor to nonhuman third parties rupture the relational web that tort seeks to construct. Human cognition is what enables community members to know their own desires and imagine the desires of others. This capacity for social imagination empowers human beings to balance self-interest with other-regard. Put plainly, human cognition is the precondition for human care. Actors who outsource their relational labor to inanimate proxies have renounced the care-producing work that the community demands. This choice may save time and money and proceed safely in most cases. Still, these actors are distorting the bilateral, care-oriented relationship at the center of tort into a trilateral counterfeit executed by a proxy incapable of social imagination or real-time injury prevention. So, whether or not these delegations are reasonable, they are wrong.
Identifying the relational wrong that lurks beneath some reasonable behavior closes a gap that has long plagued the justice theory of tort. And it promises to revive the declining strict liability sector just in time to meet the challenges of an increasingly automated society. State and market actors have begun to replace drivers, journalists, and police officers with AI proxies. Already, these efficient AI intermediaries have killed, defamed, and jailed because they were programmed to execute relational behavior without human oversight. Which tools of tort can foreground humanity in the age of automation? Just strict liability.