War Torts, Autonomous Weapon Systems, and Liability: Why a Limited Strict Liability Tort Regime Should Be Implemented

INTRODUCTION

Artificial intelligence (AI) has become a staple in many people’s daily routines.1 Commuters use ridesharing and Global Positioning System (GPS) applications;2 teachers grade and assess essays through plagiarism checkers;3 the general population receives emails through spam filters and smart email categorization;4 and personal financial account holders use mobile check deposit, fraud prevention, and credit decision features.5 Activities as simple as online shopping, social networking, and texting via voice-to-text features on a smartphone are all powered by some form of AI.6 While these shifts have been beneficial and have made lives easier, AI has had, and will continue to have, a far more profound impact on the military sphere.7

Although autonomous weapons have been the subject of many movies, such as RoboCop and Terminator, these weapons are slowly becoming commonplace in the military.8 Some people use drones as recreational tools,9 while the military uses drones for an entirely different purpose: to target specific geographic areas and launch missiles at them.10 Such strikes have become the norm within international warfare, and autonomous weapons in the form of drones, robotic soldiers, and pilotless military planes will continue to develop.11

While development of AI technologies has been consistent, regulation has not been at the forefront of AI sophistication and dominance.12 Some believe that Isaac Asimov’s Three Laws of Robotics13 continue to hold significance for AI governance.14

In 1950, Isaac Asimov set down three fundamental laws of robotics in his science fiction masterpiece I, Robot. (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given [to] it by human beings, except where such orders would conflict with the First Law; (3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.15

These laws seem to carry particular weight considering that “[a] robot without AI software would not be subject to Asimov’s laws, even if these laws had any real legal significance.”16 Nonetheless, regulation surrounding these rapidly developing machines is virtually nonexistent.17

Part I of this Note will discuss the evolution of AI from early computer models in the 1980s to sophisticated and standalone thinking machines in today’s world, while touching on some of the drawbacks. It will also discuss the evolution of AI within the military, encompassing the current state of the debate over the use of autonomous weapons. Further, it explores the regulatory changes that have already been proposed for AI and their hypothetical impacts on military AI and autonomous weapon systems. Part II first delves into factors to consider in regulating autonomous weapon systems. It then analyzes whether an AI machine can be considered human, while exploring the concept of war torts and its intersection with AI. Finally, Part III proposes a limited strict liability tort regime for regulating autonomous and semi-autonomous weapons, particularly focusing on a standard that will anticipate and account for issues facing evolving AI. This standard will attempt to address issues, such as machine and reinforcement learning, that are becoming more sophisticated within AI. It will also discuss how AI-influenced weapons account for moral decisions that humans make intuitively, identify how sovereign immunity plays a role, and explain why an engineering design standard for these autonomous weapon systems is imperative. These inclusions will be guided by the history of AI and the challenges that the field already faces.

I. BACKGROUND: THE EVOLUTION OF ARTIFICIAL INTELLIGENCE

A. Technological Developments and Advancements of Artificial Intelligence in the Last Thirty Years

Although it may seem like AI has only become a household commodity in the last decade, the reality is that it has been a developing aspect of daily life for decades. In fact, it can be traced back to ancient Greece.18 But AI transformed from fiction to reality around 1950, when Alan Turing created “the idea of machines that think.”19 Although the idea did not take hold immediately, the 1950s brought about considerable revolution in the AI sphere: in 1956, the term “artificial intelligence” was used for the first time by computer scientist John McCarthy.20 Then, in 1959, an AI laboratory was founded at the Massachusetts Institute of Technology.21 Although the 1960s and 1970s saw difficulty and criticism in AI progression, many areas, such as logic programming and common sense reasoning, were also explored in the AI sphere.22 Subsequently, the advent and popularity of the personal computer created greater intrigue around machines that think.23

By the 1980s, “[r]esearchers had come to believe that . . . intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay.”24 Expert systems25 thrived within the corporate world at this time, as the majority of Fortune 1000 companies used them for daily business activities.26 This general consensus that computers needed to become more knowledge-based also led many countries around the world to put more funding into computer projects geared towards interpretation, translation, and learning.27

In 1997, IBM’s chess-playing supercomputer “Deep Blue” became the first computer to defeat a reigning world champion in a game of chess, marking a milestone for AI: the machine could compute 200 million moves per second, an unprecedented feat.28 Following this accomplishment, IBM sought more challenges. Its next great success in the AI sphere was the development of “Watson.”29 Watson analyzed questions and content comprehensively and quickly and eventually won Jeopardy! against former champions.30 Watson understood natural language through a combination of sophisticated hardware and software that delivered a precise answer along with evidence to support it, a capability that allowed the machine to win.31

Watson then sparked the development of ROSS, a legal research tool designed to improve research time and results for law firms.32 ROSS does this by sifting through over a billion text documents per second and then displaying the exact passage that a user, having asked a question in natural language, needs.33 ROSS also gets smarter over time by learning from feedback.34 Fundamentally, ROSS and Watson are learning to understand the law, rather than simply generating keyword-based results for users.35 ROSS has become more sophisticated since the start of its development, so much so that in 2016, one of the United States’ biggest law firms, BakerHostetler, hired ROSS as a legal researcher in its bankruptcy practice in New York.36 This made ROSS the first robo-lawyer in the country. Other prestigious firms, such as Simpson Thacher & Bartlett and Latham & Watkins, have since begun to use ROSS as well.37 This is only one example of the progression of AI and the reality of machines learning and becoming more autonomous through the algorithms they initially possess.

While the above illustrates AI’s development and impact in the technology industry, AI has simultaneously affected other industries as well. For example, deep learning algorithms have been successful in radiology, pathology, ophthalmology, and cardiology.38 AI has shown a ninety-six percent accuracy rate in detecting the presence or absence of tuberculosis in patients—better than many human radiologists.39 These AI machines go through so-called “training,” during which they are shown hundreds of x-ray images from patients with or without a disease, such as tuberculosis, until the AI learns to recognize what a given x-ray shows.40 Once trained, an AI is able to detect the spread of certain diseases as accurately as, if not more accurately than, a human.41 Some AIs have even detected diabetes-related changes by examining images of patients’ retinas slightly more accurately than human physicians.42
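For readers unfamiliar with how such “training” works in practice, the sketch below (in Python, using the PyTorch library) illustrates the general pattern in its simplest form: a small model is repeatedly shown labeled example images and adjusts its internal parameters until its predictions match the labels. The tiny network and the randomly generated stand-in images are hypothetical illustrations only; they are not drawn from any diagnostic system cited in this Note.

```python
# A minimal, illustrative sketch of the "training" process described above:
# a small convolutional network learns to label images as "disease" or
# "no disease" from labeled examples. Synthetic random images stand in for
# real chest x-rays; this is not any deployed diagnostic system.
import torch
import torch.nn as nn

# Stand-in dataset: 200 grayscale 64x64 "x-rays" with binary labels.
images = torch.randn(200, 1, 64, 64)
labels = torch.randint(0, 2, (200,))

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),  # two output classes: disease / no disease
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Training": repeatedly show the labeled examples and nudge the model's
# parameters so its predictions better match the known labels.
for epoch in range(5):
    optimizer.zero_grad()
    predictions = model(images)
    loss = loss_fn(predictions, labels)
    loss.backward()
    optimizer.step()

# Once trained (on real data), the model would label a new, unseen image.
new_image = torch.randn(1, 1, 64, 64)
predicted_class = model(new_image).argmax(dim=1)
```

The essential point for the legal discussion is that the resulting behavior is learned from example data rather than written out rule by rule by a programmer.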

Today, nearly every major technology company, including IBM, Microsoft, Google, and Facebook, has laboratories specifically dedicated to AI research and development.43 These labs are based all over the world, and they focus not only on furthering AI algorithms, but also on developing AI’s ability to use reinforcement learning, which, if successfully implemented, will allow AI machines to think for themselves more than they do now.44 This concept has the potential to reshape the impact of AI on the military as well as on the technology industry as a whole.

B. Artificial Intelligence in the Military

As in the technology industry, AI in the military dates back to World War II, in the form of Goliath Tracked Mines.45 Germany was the first country to deploy and use “remotely piloted—as opposed to preprogrammed—aerial drones.”46 Throughout the war, the Allies quietly established and worked on remote-controlled weapon programs, though the eventual products were too volatile to be used consistently.47 After the War, development of remotely operated weapons slowed considerably.48 The U.S. Army and U.S. Navy were tasked with furthering research and development, while the U.S. Air Force deemed any unmanned aircraft to be “a professional threat.”49

Throughout the Vietnam War, the U.S. Army manufactured unmanned reconnaissance aircraft, including an air-launched, jet-powered drone that completed nearly 3,500 missions.50 After these systems stopped being used in the mid-1970s, the military made little progress with AI and automated systems until 1995, when there was a so-called “magic moment.”51 GPS-equipped Unmanned Aerial Vehicles (UAVs)52 were created and could be dispatched anywhere in the world for reconnaissance and targeting missions.53 These systems collected updated information on everything, including air defenses and refugee movements.54 Eventually, the military began to use a machine called Packbot, which facilitated “intelligence, surveillance and reconnaissance; battle damage assessment; hostage/barricade situations; and explosive ordinance disposal.”55 Packbots are relatively small machines, measuring 20.2 inches wide, 34.6 inches deep, and standing 7 inches high.56 These compact dimensions give them the ability to climb stairs, search tunnels, examine equipment for explosive materials, and provide soldiers with a safe first look into an area.57 Although Packbots were predominantly used on missions in Iraq and Afghanistan,58 the U.S. military now possesses more than 12,000 ground robots and 7,000 UAVs in its inventory.59

While the Packbot and the military’s other inventory items have been used primarily for extensive reconnaissance purposes, there has been great debate surrounding autonomous weapons or, as some have characterized them, “killer robots.”60 Killer robots are weapons systems that have the ability to select and fire on targets without human control.61 Control refers to the who, what, where, and how of weapons use, as well as the effects of their use.62 So far, the expectation is that weapons use requires some level of human involvement.63 In the summer of 2017, however, Kalashnikov Group, a Russian weapons manufacturer, announced not only that it had developed fully automated combat robots, but also that the range of robots used AI to identify targets and make independent decisions.64 This description sounds eerily similar to that of a “killer robot.”65

Fundamentally, one of the main sources of AI in the military is autonomous weapon systems.66 Semi-autonomous weapon systems67 are also included in the military’s terminology.68 Additionally, distinctions have been made between an autonomous system and an automated system.69 The removal of meaningful human control—or a “man-in-the-loop” element—allows robotic weapons to pinpoint and attack particular targets based on their own calculations, constituting an autonomous system.70 Automated systems, on the other hand, retain the “man-in-the-loop” element, as they mainly assist weapons operators with their tasks.71

The Pentagon first addressed fully autonomous weapons systems in 2012 when it released Directive Number 3000.09.72 For up to ten years, the Directive “generally allows the Department of Defense to develop or use only fully autonomous systems that deliver non-lethal force.”73 Considering that AI and robotics researchers believe the use of autonomous weapons systems may spark an international AI arms race,74 these systems require attention sooner than 2022, especially in light of the Kalashnikov Group’s announcement.75 Although a plethora of considerations comes with the development and existence of these weapons, there is considerable debate over whether killer robots should be allowed or used at all.

C. The Controversy Surrounding Autonomous Weapons

The topic of autonomous weapons, particularly lethal autonomous weapons systems, has been controversial.76 In 2015, thousands of AI and robotics researchers, including Elon Musk and the late Stephen Hawking, signed an open letter urging the United Nations to ban the development and use of AI-powered autonomous weapons.77 The letter argues that a ban on autonomous weapons would be beneficial to humanity.78 It describes these autonomous weapons as being “ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group,” none of which are actions that would make battlefields safer for soldiers or society a safer place to live.79 The letter argues that these weapons would inevitably be sold on the black market and would give whoever holds them too much power, especially if they fall into the hands of terrorists, dictators, or warlords.80

In November 2017, the Future of Life Institute, which is supported by Musk and Hawking, released a video depicting life with small flying killer robots that have minds of their own, targeting anyone from senators to students.81 The video showcases a small, explosive drone that could target anyone in the world after collecting data from something seemingly as harmless as a hashtag, using its cameras, sensors, and facial recognition software.82 It notes that while it is not guns that kill people, but rather people who kill people, these machines have a processor that reacts up to one hundred times faster than any human and that, unlike humans, these machines do not get emotional or disobey orders.83 The end of the video reveals a harsh reality: Stuart Russell, a professor of computer science at the University of California, Berkeley, explains that the film was more than speculation—it was actually a depiction of the integration and miniaturization of technologies that already exist.84

The video was shown at a meeting of the Convention on Certain Conventional Weapons (CCW) at the United Nations on November 17, 2017. The CCW bans or restricts the use of particular weapons that are deemed too dangerous or that may cause unnecessary suffering.85 In the past, weapons such as blinding lasers have been preemptively banned before they were acquired or used.86 This time, the CCW meeting included over eighty countries discussing the future of autonomous weapons systems and whether a ban should be put in place.87 While the United Nations has yet to make a decision on such a ban, the United States has stated that any engagement involving lethal force must have human approval, meaning autonomous weapons can currently be deployed only for non-lethal missions.88 Although this seems like a rational policy now, the meeting placed an emphasis on defining killer robots and on how much human interaction must be involved for these weapons to be permitted.89 While these issues were not resolved at the November 2017 Convention,90 the results, or lack thereof, highlight a couple of drawbacks in the current state of military AI machines.

D. A Shortcoming: Defining Artificial Intelligence

AI is constantly changing, and many claim that to properly regulate the field, AI needs to be defined more concretely.91 Even though AI has become increasingly present in daily life, a uniform definition has yet to be established.92 Although most have shied away from defining AI, various frameworks attempt to define and characterize it.93

From a philosophical and scientific standpoint, AI has been organized into four categories: (1) systems that think like humans; (2) systems that act like humans; (3) systems that think rationally; and (4) systems that act rationally.94 These categories encompass the criteria scientists and philosophers have advocated for in defining AI.95 Modern information technology sciences have adopted the fourth criterion, which aims for systems to behave rationally rather than to think like and imitate humans.96 This is consistent with AI technology’s goal of learning from the data it collects.97

Additionally, AI raises many concerns through machine learning, a subfield of computer science encompassing computer programs that learn from experience and are then able to improve their performance.98 Today, machine learning is applied mainly in Internet search results, facial recognition, fraud detection, and data mining.99

As AI becomes more sophisticated, it can further be divided into two additional categories: semi-autonomous and fully autonomous.100 Wherever a human is able to operate or override a machine, it is, at best, semi-autonomous.101 Completely autonomous machines, on the other hand, have a distinguishing factor: they will not be “tools used by humans; they will be machines deployed by humans that will act independently of direct human instruction, based on information the machine itself acquires and analyzes.”102 These machines may make influential decisions in circumstances that the machine’s creators may have never considered or addressed.103

While autonomous and semi-autonomous weapons have been defined, the CCW’s convention shows that a definition for killer robots is necessary.104 Although there is no established definition for killer robots, their characteristics are instructive.105 The main commonality in publications’ definitions of killer robots is the robots’ ability to pinpoint and engage targets without any human input.106 This characteristic seems to capture killer robots—or fully autonomous weapons—but it does not serve as an official definition. The ambiguity and reluctance to create a definition for killer robots may stem from the lack of a concrete definition of AI generally.

Semi-autonomous and fully autonomous AI machines create distinct and pressing liability issues that have yet to be fully addressed by legislation or regulation. These issues are discussed in detail in the following Section.

E. Regulations Surrounding Artificial Intelligence

AI liability and regulation continue to be under-defined. However, academics have explored legal doctrines as they apply to autonomous machines in the context of tort law, contract law, and the law of war.107 In its current form, AI liability has been divided into different categories based on three situations:

First, a self-driving vehicle collides with a human and harms him. Second, a computer program operated by an online business enters into a contract with a human being where the online business did not authorize the contract. Third, an autonomous weapons system capable of selecting its own targets fails to distinguish between civilians and military personnel.108

Products liability law mainly governs issues involving self-driving vehicles, with agency law potentially playing a role.109 Online contracting is assessed through agency law.110 Autonomous weapons systems implicate doctrines of command responsibility111 and state responsibility.112

Notably, no federal agency has been tasked with creating regulations or assessing new AI technologies that go to market.113 While Matthew Scherer lays out a comprehensive proposal for a federal agency for AI,114 some established agencies have already taken to analogizing certain AI systems to existing regulatory schemes to both gain jurisdiction and address them in a more relevant legal context. For example, the Federal Aviation Administration (FAA) characterized drones as aircraft, creating many problems and limitations in their uses.115 Additionally, the Food and Drug Administration (FDA) analogized surgical robots to laparoscopic surgery, which allowed these robots to go to market more quickly.116

Currently, aside from some agencies analogizing to existing products, each instance involving AI that results in liability is judged through a comparable area of law, such as agency or contract law.117 While the need for an overarching organization that oversees AI standards and regulations has been debated,118 the reality is that the field as a whole is not yet as highly regulated as others.119 Though an overarching regulatory body may be a sufficient, temporary solution for some products, autonomous weapons and general AI used in the military are likely to present more difficulties.

As is evidenced through its advancements, AI has been applied in a number of ways in industry, military services,120 medical services, science, and games.121 This shows that AI and autonomous systems as a whole will not only become widespread among industries but will also make a substantial impact on how jobs are performed and how malfunctions or accidents are addressed.122 However, even though the evolution of AI has brought many benefits, it is difficult to understand the full impact AI will have until regulation is considered.

F. Existing Proposals: Potential Regulatory Changes and Their Hypothetical Impacts on Artificial Intelligence in the Military

There are various published proposals that lay out policies and frameworks that should be enforced in order to better regulate the AI sphere.123 The most encompassing proposal details a potential agency that would apply a particular liability standard by which AI would be evaluated before it has an opportunity to go to market.124 Scherer proposes legislation titled the Artificial Intelligence Development Act (AIDA), which would create an agency that certifies AI systems’ features and overall safety.125 The agency would test and certify AI machines before they reach the market, similar to the FDA’s responsibilities.126 The new agency Scherer proposes would have two missions: policymaking and certification.127 However, this also presents certain problems, such as agency personnel not having the appropriate knowledge to properly judge rapidly developing AI systems.128

While a new agency is an enticing idea, some agencies have already linked AI systems to existing regulation in order to gain jurisdiction and address these systems in more relevant legal contexts.129 Similarly, arguments have been made for establishing a so-called “Federal Robotics Commission” that would aim to “deal with the novel experiences and harms robotics enables.”130 This may prove useful in the military environment, since killer robots and autonomous weapons systems as a whole have the potential to wreak havoc if there is even a slight deviation in their algorithms.131

In December 2017, Congress proposed a bill titled the Fundamentally Understanding The Usability and Realistic Evolution (FUTURE) of Artificial Intelligence Act of 2017 (FUTURE of AI Act),132 which would establish a federal advisory committee to oversee the development and evolution of AI.133 The committee’s goals are to promote innovation, to optimize the development of AI, to promote and support the development and application of AI, and to protect the privacy of individuals.134 While this may be beneficial to society in the grand scheme of workforce productivity and technological innovation, it does little to regulate the potential negative consequences of AI’s sophistication, specifically within autonomous weapons systems.135

Until recently, arguments have predominantly been made for applying products liability in legal proceedings if an autonomous machine malfunctions.136 Others have suggested that courts should apply a negligence standard to cases involving certified AI and a strict liability standard to cases involving uncertified AI.137 The imperative issue of allocating liability among a designer, manufacturer, distributor, and operator would be a central part of the liability scheme in these cases.138 Although products liability has been a basis for determining liability in AI,139 specifically in relation to self-driving cars, the same concepts may be difficult to apply to the military and to autonomous weapons.140 Applying products liability in this industry means that anyone from the programmer or robotics developer—who creates or designs the weapon—to the manufacturer of the weapon may be liable for any damaging result of a mission.141 This may create particular problems when autonomous weapon systems begin to learn and develop, moving further away from the initial program that the programmer designed or the manufacturer built.142

Based on the scope of today’s legal framework, it may be difficult to account for the progression and evolution of AI. As a result, the current system may not be equipped to evaluate the legal repercussions that occur from possible misuse of military AI, specifically autonomous weapons systems. Therefore, a different framework will need to be enacted.

II. ANALYSIS

A. Considerations in Regulating Autonomous Weapons

While a new regulatory framework is needed to govern autonomous weapons systems, various characteristics of AI systems impact these weapons systems and will need to be seriously considered before any structure is enacted.

1. Machine and Reinforcement Learning

Machine learning and reinforcement learning are two aspects that will need to be considered as AI technology advances.143 First, machine learning involves computer algorithms that can “learn,” or improve their performance on a given task, as time passes.144 Reinforcement learning, a category of machine learning, entails learning through trial and error.145 Reinforcement learning is already prevalent in some forms of AI. For example, a computer developed by a subsidiary of Alphabet learned and mastered Go, a notoriously complicated board game, and eventually beat one of the world’s best human players.146 It is likely that this type of learning will begin to flourish within AI. It not only improves self-driving cars; the same technology also allows robots to grasp objects they have never encountered before and can determine the optimal configuration for the equipment in a data center.147
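To make the concept concrete, the following is a minimal, purely illustrative sketch (in Python) of reinforcement learning in its simplest tabular form: a program improves its behavior through repeated trial and error, guided only by numeric rewards rather than explicit instructions. The toy environment, reward values, and parameters are hypothetical and invented solely for illustration; real systems of the kind described above rely on far larger state spaces and neural networks rather than a lookup table.

```python
# A minimal Q-learning sketch: an agent learns, by trial and error, which
# moves reach a goal cell on a tiny one-dimensional track. The environment
# and rewards are invented for illustration only.
import random

N_STATES = 6          # positions 0..5; position 5 is the "goal"
ACTIONS = [-1, +1]    # move left or move right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update the estimate for this state-action pair from experience.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (
            reward + gamma * best_next - q_table[(state, action)]
        )
        state = next_state

# After training, the learned policy at every non-goal position is "move right."
policy = {s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES)}
print(policy)
```

The point relevant to liability is that no one writes the final behavior down; it emerges from the system’s own accumulated experience.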

While reinforcement learning in AI machines is still in relatively early stages of development, it will eventually become sophisticated and prevalent across many AI machines and throughout various sectors, including the military. While learning has great potential for the evolution of these AI systems, it may make it more difficult to determine liability in the event of a harmful situation or event, because these machines will eventually become sophisticated enough to make their own decisions and come to their own conclusions without a human’s influence.148

2. The Ethics Problem: Machines Making Moral Decisions

As AI continues to develop, the possibility that AI systems will think and make decisions in certain situations in the future, especially where autonomous weapons are involved, must be considered. This state of affairs may become particularly problematic with fully autonomous weapons that remove all levels of human interaction.149 There are many potential issues that arise with machines learning as they go and detaching themselves further from their initially engineered prototypes.150

Machine learning technologies lack intuition, an important characteristic humans possess that cannot be engineered into technology.151 Intuition is sometimes essential to properly assessing and reacting to a situation.152 Although the military has extensive opportunities to develop a soldier’s training in the field, there are still situations in which a person’s intuitive judgment and so-called “gut feeling” must be taken into account to respond to a situation.153 At this stage, artificial intuition is a concept that some have proposed as a subset of AI, but it has yet to be widely implemented.154 It is difficult to imagine an AI system ever being able to generate similar feelings, considering how scientifically these systems are built.155 For this reason, “[t]he best way forward is for humans and machines to live harmoniously, leaning on one another’s strengths.”156 While this is a nice sentiment, it is difficult to incorporate into a liability analysis.157

When considering autonomous weapons systems, specifically, the main legal and moral issue is the act of assigning human decision-making responsibilities to autonomous systems that are designed to kill humans.158 What does this mean?

Common examples of moral decisions that need to be made by autonomous AI machines are seen in driverless cars.159 For example, if Person A’s car is speeding down a road and a school bus with twenty children crosses its path, does Person A swerve and risk their own life to save the children, or does Person A continue driving, potentially placing the bus full of children at risk?160 Likewise, if a pilotless, completely autonomous aircraft is traveling with explosives that need to hit a target, but its data then reveals that one hundred civilians surround the targeted terrorist, what does the machine do in that instance? These are the types of decisions that humans contemplate when such scenarios arise, and computers will need to make these calls in milliseconds. One notable disadvantage is that these AI machines are completely devoid of human compassion.161 This may create a different standard for judgment within liability if a mission goes wrong.

While decisions constitute a large part of humans’ daily lives, the continued growth of machine and reinforcement learning presents a dilemma for liability. There are two perspectives to consider: (1) regulating and creating standards for programmers; or (2) creating a framework to regulate the actual machine. With the former, it must be noted that regardless of the advancements of AI technology, a human being—who is bound by the law—will always be at the starting point of these systems.162 It would even be possible to keep the current liability framework unaltered, since wherever human involvement is evident, a human would be responsible for the wrongful acts committed by or involving a machine.163 With the latter, there is ample opportunity for regulatory innovation.

B. Can Artificial Intelligence be Considered Human?

To answer this question, we must first ask what exactly constitutes being human. This Note previously discussed machine and reinforcement learning.164 It can be argued that the process of thinking is directly related to the characteristics of a human, and much of the progress in AI development shows that machines increasingly exhibit characteristics similar to those of humans.165 Although case law would provide the most direct answer to this question, it is, unfortunately, likely unhelpful because, so far, the only AI-related lawsuits have concerned patents on the underlying robotics.166

Though case law is unhelpful in determining whether AI can reach personhood to the point of legal ramifications, Autonomous Intelligent Systems (AIS) have also been at the forefront of the discussion of AI’s impact on society and are useful in assessing legal personhood.167 AIS not only perform tasks like those of other intelligent machines, but they are also sophisticated enough to interact with each other and with human beings.168 There are now even institutes dedicated to researching robot morality and determining just what encompasses a “friendly robot”169—seemingly fitting when there are also killer robots on the other end of the spectrum.

Other factors have also been considered when discussing an AI’s personhood. For example, courts have interpreted personhood loosely for artificially created business entities.170 There have already been arguments as to whether an AI can own real property, and that reality may not be far off—though with a few strings attached, namely the discretion and management of a group, such as a Board of Directors.171 It would be unsurprising if a military equivalent becomes a topic for discussion in the near future.

Further, the Restatement (Third) of Agency includes an individual, an organization, and a government in its definition of “person.”172 Notably, it includes “any other entity that has legal capacity to possess rights and incur liability.”173 There has been debate over whether an AI system can feasibly fall under the category of “any other entity.” However, the Restatement also specifies that:

[a]t present, computer programs are instrumentalities of the persons who use them. If a program malfunctions, even in ways unanticipated by its designer or user, the legal consequences for the person who uses it are no different than the consequences stemming from the malfunction of any other type of instrumentality.174

As one academic states, an AI or AIS “is an instrumentality of the person who presses ‘go,’ even though the complex computer program promises to act fully autonomously.”175 But what happens when a machine learns for itself and becomes sophisticated enough to make its own decisions without the human interaction that currently surrounds most autonomous functions? Neither the Restatement nor any other publication has explicitly included guidelines for this issue.

Even though there is extensive debate surrounding whether AI systems can reach legal personhood, arguments have been made for machines attaining a place in this category.176 One sociologist explains that, while intelligence is a relatively obvious factor in both humans and AI machines, it is not all that is considered in people; sentience, consciousness, and self-awareness are also vital traits of humans.177 The ability to feel things, awareness of one’s body and surroundings, and recognition of that consciousness are all factors that arguably make humans who they are.178 While machines may be as smart as or smarter than humans, these aspects of emotional intelligence come into question when debating AI’s humanity. Laws account for the mental incapacities that certain humans may experience, so could an AI’s capacity be similar to that of a person deemed legally insane? The capabilities of AI have been analogized to those of mentally limited people, such as children, those who are mentally incompetent, or, more generally, those who lack a criminal state of mind.179

Lastly, AI might surpass humans in curiosity and desire to learn.180 Once an objective is defined for an AI system, achieving that goal becomes the machine’s top priority.181 This becomes not only the system’s biggest motivation but also, essentially, an obsession.182 This is another aspect of AI that resembles human traits, furthering the case that AI could be considered on par with humans. Autonomous weapons are particularly relevant here. Considering that these weapons select and target specific individuals, places, or things, it is easy to see how hitting that target would become the weapon’s number one priority.

C. Can a Machine Have Intent?

While targeting and achieving a particular goal may become the sole priority of an AI or, in the context of this Note, a fully autonomous weapon, another important question is whether a machine can have intent. Despite AI’s sophistication, it would be difficult to establish that an AI system has its own intent at this point in time.183 Autonomous weapons systems can act with one goal (or target) in mind, but it is unclear whether this equates to acting with intent.184 Some argue that intentional torts will not apply to AI.185 In fact, a distinction is made between acting with intent and acting intentionally, based on the limited, established functions that an AI is programmed to exhibit.186 Notably, AI that assists officers and armed forces is distinguished in this capacity, and though it may be difficult to hold AI to the standards of intentional torts, there are still tort actions that may be brought where military AI and autonomous weapons systems are used.187

If AI is like a mentally limited individual,188 then it may be judged under similar systems of liability.189 For example, in tort cases involving a person who is mentally incompetent, that person is considered a mere instrument rather than an actual perpetrator.190 Based on this brief analysis, two other options exist: either the programmer creates the AI or autonomous weapons system with some specific intent to cause harm, or no programmer or other individual involved in the creation process acts intentionally, making the state responsible for the resulting harm.191 This is where the concept of war torts comes to the forefront.192

D. War Torts

War crimes193 and, more broadly, the law of war194 have been at the center of international law for centuries. Historically, states rather than individuals were responsible for these crimes.195 This responsibility has now largely shifted to the individual, as individual criminal liability has become the focus of war crimes enforcement, while state liability has decreased.196

In its current state, the law of state responsibility indicates that a state may owe an international legal obligation to individuals, another state, or the international community in its entirety.197 While internal legal systems impose separate civil and criminal responsibilities on their citizens, state responsibility includes no such distinction.198 Generally, states are responsible for investigating and prosecuting war crimes committed by their nationals and members of their armed forces and for war crimes committed on their territory.199 Additionally, states need to be aware of crimes committed by non-state actors, such as “individuals or entities empowered to exercise governmental authority,” those who “act under a state’s direction or control,” and “private individuals or entities which the state acknowledges and adopts as its own.”200 However, there has been a global shift from state responsibility to individual criminal liability in the realm of international war crimes over the last seventy years.201

While individual criminal liability is the leading regime for international war crimes, war torts have also been discussed by academics.202 Rebecca Crootof proposes “explicitly identifying ‘war torts’ as serious violations of international humanitarian law that give rise to state responsibility.”203 She emphasizes that the structure can be similar to that of internal domestic law: an individual action may be both a war tort and a war crime.204 The war torts regime would be tailored mainly to internationally wrongful acts,205 much as regular tort actions are brought for wrongful acts. Importantly, while criminal law contemplates moral culpability, tort law does not; rather, it aims to minimize accidents and deter others from engaging in similar behavior.206

Crootof identifies four reasons and benefits for implementing a war torts regime: (1) “[i]t would clarify the applicability of the law of state responsibility in armed conflict”; (2) it would create an international norm for lawful behavior if states accept fault and take responsibility; (3) it would “deter states from employing means and methods of warfare that result in serious violations of international humanitarian law”; and (4) it would allow individuals injured from internationally wrongful actions to seek and accept remedies, which would not be possible in solely a war crimes regime.207 This kind of regime would become especially useful with the establishment of autonomous weapons and more general military AI systems.

E. Artificial Intelligence in a War Torts Context

Crootof lays out a quintessentially lawyer-like answer to the question of who should be liable when an autonomous weapons system acts wrongfully: she says it depends.208 International criminal law should govern when an autonomous weapons system is used recklessly or with intent to commit a war crime.209 In other instances, namely certain war crimes and cases where no individual acts willfully, states should sometimes be held responsible for war torts.210 This posits bringing state responsibility back to the center of international humanitarian law when there is a clear war tort committed.211

International law is usually formulated as law governing states rather than individuals.212 This concept would be reinforced when AI and autonomous weapon systems are brought into the equation. States are better equipped to deal with tort claims brought due to the wrongful acts resulting from autonomous weapons.213 In practice, states will be responsible for developing, purchasing, and integrating autonomous weapon systems into their military entities,214 accounting for most aspects of the liability chain.215 Additionally, states could internalize any costs from weapons that commit crimes.216 The shift to state responsibility for autonomous weapon related liability would only require a clarification of the applicability of existing law rather than the creation of completely new regimes, making this a significantly more plausible option.217

Once this is clarified, there are several options of tort liability regimes that can be implemented to govern war torts. These include strict liability,218 negligence liability,219 an integrated international and domestic liability regime,220 an independent tribunal system,221 or a limited strict liability tort regime.222 While each of these present interesting arguments, this Note argues for the implementation of a limited strict liability tort regime.

III. PROPOSAL: IMPLEMENTING A LIMITED STRICT LIABILITY TORT REGIME AS THE STANDARD OF JUDGMENT IN CASES

A. The Standard—Autonomous Weapons and State Liability

Because lethal autonomous weapon systems have the potential to be substantially more dangerous than semi-autonomous and non-lethal autonomous weapons, they should be governed by a strict liability standard. Generally, strict liability is applied much more narrowly and under stricter circumstances than any other theory of liability within tort law.223 Most commonly, strict liability is applied in cases involving animals, some nuisance cases, libel, misrepresentation, vicarious liability, workers’ compensation, and ultra-hazardous activities.224 Although the use of fully autonomous weapon systems, specifically those that are lethal, does not fall within any of the first-listed categories, it arguably falls under the category of ultra-hazardous activities. In this scenario, an ultra-hazardous or dangerous activity is performed, and a defendant is held liable for an injury even in the absence of any fault.225

This concept traces back to the original case establishing strict liability: Rylands v. Fletcher.226 The English court in that case differentiated between a “natural” and a “non-natural” use of land.227 American courts have generally adopted Rylands but remain reluctant to impose strict liability without considering additional factors.228 This means that courts will take into account not only the activity itself, but also the area and circumstances in which it is carried out.229

The Restatement (Third) of Torts considers four main factors when strict liability is at issue. An abnormally dangerous activity gives rise to strict liability if (1) the activity creates a foreseeable risk of physical harm; (2) the risk is a “highly significant” risk; (3) the risk remains “even when reasonable care is exercised by all actors;” and (4) “the activity is not a matter of common usage.”230 The strongest case for strict liability is where a defendant knows and understands the significant risk the activity poses but decides to follow through anyway.231 In these instances, strict liability will undoubtedly govern.232

Autonomous weapons carry an inherent, highly significant risk. Analyzed through both the Rylands approach and the Restatement (Third) of Torts approach, strict liability would be appropriate to govern fully autonomous weapons. Under Rylands, while using autonomous weapons is arguably a function that the military is privileged to exercise, these weapons will not involve meaningful human control.233 This puts them outside the realm of “ordinary” uses of similar items,234 especially considering that human control would not be guaranteed with fully autonomous weapons.235

The Restatement provides an even stronger reason for lethal autonomous weapon systems to be governed under strict liability. Autonomous weapons are inherently dangerous and pose significant risk regardless of their sophistication. Because killer robots will have the ability to select and engage targets on their own, rather than through human guidance, there is no indication of how their paths will change from the point at which they depart to the point at which they hit a target. Machine learning reinforces this risk, since these killer robots can eventually become sophisticated enough to act and think on their own.236 This scenario also shows where a negligence standard would fail: there would be no way to avoid the risk when the machines have a mind of their own, unless these weapons are not used at all.237

Take, for example, the miniature drone-like lethal autonomous weapon in the “Stop Autonomous Weapons” video238 played at the CCW convention. If that weapon were deployed with one target in mind, but then reexamined the data in its software and determined that a different person should be targeted instead, it would undoubtedly follow the new route rather than the original.239 Meanwhile, the military that set the operation in motion would have no meaningful input or control, since the killer robot is fully autonomous.240 Therefore, there is no reasonable way to avoid the risk that is created, since a human could neither change the machine’s course nor stop the machine. Although this is dangerous, in this hypothetical, the state may nonetheless choose to deploy the weapon.

Although there have been several United Nations conventions held to discuss the aforementioned ban on these weapons,241 the world has yet to come to a consensus on a plan for the rise of autonomous weapons. Without an enforced strict liability tort regime for accidental injuries sustained from killer robots, the possibility of catastrophic accidents that go without remedies would be incredibly high. Such a liability system would make a state think twice when choosing whether to use or deploy certain AI.

B. The Standard—Semi-Autonomous and Non-Lethal Weapons and Manufacturer Liability

By contrast, non-lethal fully autonomous weapons and semi-autonomous weapons will be better governed under a negligence scheme. Largely, these categories of weapons already exist, including Packbots,242 drones, and other UAVs. Unlike fully autonomous weapons, these machines involve some form of human guidance or support at some stage.243 This difference is fundamental to the analysis of the type of liability regime that should be implemented for these weapons because, unlike with fully autonomous weapons, the risk here can be avoided or at least minimized through human involvement.244

Notably, the Restatement (Second) of Torts provides that both a negligence approach and a strict liability approach can be applied to ultra-hazardous activities.245 This confirms that there is nothing unusual about separating the approaches for these two categories of autonomous weapons.

While a negligence standard may not be appropriate for fully autonomous weapons, since even the most careful designer “could not . . . [necessarily anticipate] the decisions [an autonomous weapon system] might eventually make in a complex battlefield scenario,”246 it is more appropriate here. Autonomous weapons systems are designed specifically for independent decision-making.247 Semi-autonomous weapons systems, on the other hand, require human interaction. Therefore, a finding of negligence is much more likely against the designers or manufacturers of semi-autonomous weapons.

In reality, it will likely be difficult to establish or bring an action for a design or manufacturing defect in this scenario.248 This proposed regime does not represent a significant shift, but it will nonetheless provide a measure of accountability, notwithstanding any immunity or exemptions.

C. Issues to Consider: Sovereign Immunity Through the FTCA and MCA

The main obstacle to be addressed is sovereign immunity, which not only keeps states from being sued in foreign courts, but also eliminates state-to-state tort actions.249 While sovereign immunity exists within many nations, this Note will discuss its relevance only in the context of the United States.

At the forefront of the exceptions governing which claims cannot be brought against the United States lies the Federal Tort Claims Act (FTCA). In theory, the FTCA exists so that civilians and wronged individuals can bring suit against the United States when they are harmed by a government employee acting within the scope of his or her duties.250 However, the FTCA also allows the United States to invoke any judicial or legislative immunity available to it in order to minimize damages or to eliminate a lawsuit altogether.251 Notably, the FTCA makes an exception for intentional torts, ensuring that no such claims will be brought against the United States government.252

States typically do not take responsibility for tort actions, particularly in a war setting.253 However, if the United Nations does not decide on a preemptive ban on lethal autonomous weapons systems, then a regime will need to be put in place since humans will have less control over machines’ actions. If a system is not put in place, then there is a real possibility that these weapons systems will wreak havoc and cause harm to civilians and people who should not be in the line of fire at all.

Another consideration is that no civil liability claims can be brought under the FTCA where the government or its contractors are operating during wartime.254 However, if these weapons systems are being tested domestically, become uncontrollable, and cause an injury to a civilian, then a negligence action under the FTCA may be brought.255 Despite the FTCA’s exceptions, then, there are instances in which a lawsuit can be brought. While it will undoubtedly be difficult to establish certain negligence actions involving semi-autonomous and non-lethal weapons, there is room to break down the exceptions and create a standard for these lawsuits.

Additionally, the United States military has its own Military Claims Act (MCA),256 which compensates individuals for damages caused by government activity.257 MCA claims are broken down into two categories: (1) injury or damages caused by military personnel or civilian employees acting within the scope of their employment; and (2) injury or damages caused by noncombat activities.258 The second prong is irrelevant here. The first prong, however, may be relevant if AI systems are deemed to be military personnel or, at least, an extension of military employees. One major distinction between the MCA and the FTCA is that, while the MCA applies worldwide, a denied MCA claim carries no right to sue; if the agency denies a claim under the FTCA, one can still pursue a lawsuit.259 MCA claims present challenges similar to those under the FTCA, such as the exemption of combat activities during times of war.260 Such a scenario, however, may create a claim under both statutes (assuming weapons are being tested and the situation goes awry).

Although it currently appears that the United States can avoid liability under the FTCA and MCA if combatant activity goes amiss, the use of AI, and especially of fully autonomous weapons, would require a review of the activities included in these exceptions.261 This is especially so because such weapons lack meaningful human control, and many aspects of war need to be evaluated by humans before action is taken.262

Even if the aforementioned ban is implemented by the United Nations, the liability aspects will still be relevant for semi-autonomous and non-lethal weapons because these weapons will likely continue to be used.263 The liability scheme for these weapons will be somewhat clearer than that for fully autonomous weapons systems, since semi-autonomous systems still adhere to the “man-in-the-loop” model.264 This does, however, bring about a discussion of potential regulation governing programmers and the standard to which they must adhere when initially creating the software for these autonomous weapons.265

CONCLUSION

Autonomous weapons systems can be enormously advantageous to military efforts across many nations, but they also carry the potential for unnecessary devastation.266 When there is great potential for both growth and destruction, an international standard for judging the potential horrors is necessary.267 Although it is unclear whether autonomous weapons will be preemptively banned, it is vital to prepare in case they are not. A limited strict liability tort regime is the most versatile and customizable standard for judging these actions in the current climate. It would place lethal fully autonomous weapons systems under a strict liability regime, while semi-autonomous and non-lethal autonomous weapons would fall under a negligence standard. Obstacles such as sovereign immunity are not to be ignored but, if addressed properly, the outcome will be a functioning and sensible governing standard for war torts—actions that are becoming only more real.

* Senior Articles Editor, Cardozo Law Review, Volume 40. J.D. Candidate (May 2019), Benjamin N. Cardozo School of Law; M.A. with Honours, The University of Edinburgh, 2015. I would like to thank Professor Anthony Sebok for his unparalleled guidance and feedback; my Note Editor, Katherine Medianik, for continuously pushing this Note to greater depths; the editors of Cardozo Law Review for their hard work in preparing this Note for publication; my friends, particularly Pamela Blatkiewicz, Amy Chou, and Zack Freeman for selflessly reviewing multiple drafts; and most importantly, my parents, Boris and Galina, for their endless encouragement and support.