Racial Recognition

Introduction

Every human is unique. We all look, sound, and walk differently from one another. These unique physiological and behavioral characteristics could be translated into biometric identifiers. With developments in the intersection between biometric identification and Artificial Intelligence (AI), algorithms are now capable of measuring and analyzing unique human characteristics, such as fingerprints, palm prints, irises, faces, voices, gaits and gestures, typing patterns, and handwriting, for verification and identification purposes.1 With the rise in computational capabilities, such recognition technology is increasingly used by interested parties to combat terrorism, authenticate flight passengers or identify undocumented migrants at airports, or by the market for various consumer-related purposes, like increasing security or simply making products and services more accessible or enjoyable.2

The use of recognition technology for purposes of identification, perhaps most notably facial recognition, has now entered the realm of criminal enforcement. In their efforts to prevent, investigate, detect, and prosecute crimes, law enforcement agencies on both the federal and state levels began using facial recognition technology early in the twenty-first century.3 With further advancements in the field of AI, such use has quickly spread across police departments in America.4 When attempting to identify culprits, police officers, including federal agents, are now positioned to feed an algorithm a suspect’s image, which can be matched against databases containing images of individuals to produce a list of potential matches.5

While the use of recognition technology by law enforcement agencies is likely an important tool for maintaining public safety and security, it is also highly troubling from human rights and liberties perspectives.6 This technology, most notably facial recognition, has repeatedly and systematically been shown to be erroneous—making many inaccurate identifications (false positives).7 Such inaccuracy, as researchers continue to demonstrate, is not evenly distributed across cohorts: the technology makes dramatically more false identifications for women than for men and, in the context of this Article, for Black people than for white people.8 While such use of recognition technology digitally places nearly half of Americans, along with many foreigners, in a perpetual lineup,9 it more dramatically affects those who systematically tend to suffer from racially biased enforcement within the realm of criminal law, duplicating and potentially amplifying these mistreatments.

This Article examines the effects of combining recognition technology with criminal enforcement—tainted with racist algorithms, datasets, and decision-making—defining it as racial recognition. As further discussed, racial recognition might stem from biased and often homogenous developers of recognition algorithms and services,10 tainted training data and datasets,11 and both institutional and individual police racism evident throughout history and ongoing in current American society.12

As this Article further suggests, a biased system combined with institutional or personal racism within police work might not only perpetuate racial bias but is also likely to increase mistreatment of marginalized communities, often Black people, legitimizing legal action against them, thus increasing racial disparities and social control over these communities. While there are some recent initiatives to place moratoriums on the use of facial recognition by law enforcement agencies,13 initiated by both private entities and a few state legislatures,14 such legal intervention is currently insufficient to regulate racial recognition, even as this technology might soon be normalized and structured within daily police work.

The time is ripe to directly address the concerns of racial recognition on the federal level before it becomes too late. This Article thus analyzes how to properly regulate the use of recognition technology, most dominantly facial recognition, for purposes of suspect identification by criminal law enforcement agents, while focusing on the racial aspects that unregulated use of this technology will perpetuate and amplify. This analysis is composed of three main Parts.

Part I provides a general taxonomy of criminal enforcement, further divided into five eras—locating the use of recognition technology within the fourth era of digital policing. This Part further explores developments in the field of biometrics and recognition technology to set the grounds for discussing the racial aspects of recognition technology within the context of criminal enforcement.

Part II introduces the rise of racial recognition—how recognition technology and its use by enforcement agents could be embedded with racism. It begins by scrutinizing the general racial aspects of recognition technology as they currently unfold and continues by zooming in on criminal law to examine how the combination of recognition technology and racism becomes highly troubling within the realm of criminal law enforcement.

Part III turns to discuss and analyze the regulation of racial recognition and offers viable solutions to this conundrum. It begins by providing an analysis of the legality of using recognition technology within criminal law, while considering its proven racism. It does so by dividing the discussion between constitutional protections and other laws and regulations that are prime candidates to either directly or indirectly regulate police work, algorithms, and datasets in the context of racial recognition. Upon concluding that the current legal landscape is insufficient to regulate racial recognition properly, Section III.B discusses the inevitable entrance of recognition technology for purposes of identification in the future and how to slow it down, further dividing the discussion between legal and non-legal modalities. This Section then offers a conceptual blueprint for policymakers, detailing the three stages, and the steps within them, that policymakers must follow to properly legalize the use of recognition technology in the near and inevitable future, while focusing on human rights and liberties in general and those of the people affected by this technology in particular—mostly marginalized and over-policed communities.

I. Technology’s Growing Role in Criminal Enforcement

Criminal enforcement and technology share a meaningful history. With new technological developments, it became natural for law enforcement agencies to seek ways to implement and use these innovations to aid in performing their legal mandates of maintaining public safety. Naturally, equipping enforcement agents with new tools to combat crime has had many benefits but has also raised concerns of misuse. To better understand how recent developments within the intersection of AI and biometrics could be misused against some cohorts, this Part discusses the growing role of technology in criminal enforcement generally and then turns to further explore biometric developments, as well as recognition technology, in order to set the grounds for discussing their racial aspects and potential misuse.

A. Criminal Enforcement and Technological Innovation

Technology began to assume a role in criminal enforcement roughly two centuries ago. It began in what some have termed the Political Era,15 lasting from 1840 to 1920, during which the state granted the police access to new weapons: along with inventions like the nightstick, police officers began using more sophisticated weapons, like Colt’s first multi-shot pistol.16 New developments in communication technologies also joined the array of tools used in criminal investigations, and enforcement agents in many states were technically, and often legally, positioned to wiretap telegraphs and telephones to obtain data on what individuals were writing or saying.17

Then came the Professional Model Era, roughly lasting from 1920 to 1970, in which inventions like the polygraph and fingerprint and handwriting classification systems revolutionized criminal investigations.18 This era saw many developments. In its early days, crime laboratories arrived in America.19 In the mid-1930s, American police began using automobiles and two-way radios.20 In the late 1940s, traffic law agents began using radar speed guns.21 And finally, beginning in the 1960s, computers assumed a more significant role within police efforts to combat crime, as exemplified in the formation of the Federal Bureau of Investigation’s (FBI) National Crime Information Center (NCIC), which enabled many police departments across America to connect to a computer for the first time.22

Many consider the 1970s as the beginning of a third era in criminal enforcement—some have dubbed it the Community Policing Era.23 At that time, American police departments entered a large-scale computerization phase, which included computer-assisted dispatch, management information systems, and a nationwide centralized call collection system (the famous “911”).24 Technology also made it possible for enforcement agencies to use new innovations, like soft body armor, night-vision devices, pepper spray (as a force alternative), and tasers, among other inventions made readily available and legal for police use.25

While current police practices could be located somewhere in the midst of this third era, this Article frames new technological developments within a fourth one—that of big data, hyper-connectivity, and AI. This era, dubbed here as Digital Policing, continues the role that technology played within law enforcement agencies, but its growth might be exponential.26 The starting point of Digital Policing could be traced to the early 1990s, with the invention of the public internet.27 Since making its debut, the internet has developed into a communication tool for many individuals. In a relatively brief time, data storage has become more accessible and cheaper, while computers have become more mobile and affordable for almost anyone to use.28 With the rise in connectivity and storage, and along with other technological developments, the growing field of AI has enabled individuals to go online in new and exciting ways, which in turn has paved new pathways for enforcement agencies to locate and investigate crimes.29

Public infrastructure has also become more connected. The public sphere has gradually become awash with sensors of various kinds—more cameras and other sensors that could be used for investigating crimes or enforcement in real time.30 For example, these sensors and cameras aid enforcement agencies in detecting license plates to obtain timestamps and locations of cars;31 body-worn cameras (BWCs) can aid law enforcement investigations by capturing both video and audio of those in the vicinity of an agent;32 drones are used to obtain an aerial view of a crime scene or in real time to locate suspects;33 gunshot detection technologies are deployed for detecting, recording, and locating the sound of gunfire;34 and, as a final example, predictive policing uses AI to work more efficiently and proactively in policing.35

The combination of these and other technological developments is likely to substantially impact how criminal enforcement is reshaped and, one might suggest, eventually lead to a fifth conceptual era of enforcement: that of Autonomous Policing. One day, maybe even during our lifetimes, the unfolding of the so-called Industrial Revolution 4.0 might, in turn, make enforcement autonomous and machine-reliant.36 Autonomous policing robots might one day partially or fully enforce the law. The field of warfare is already showing evidence of heading in this direction, as we begin to witness the automation of militaries in some parts of the world.37 Criminal enforcement might thus follow suit.

Currently, however, while some autonomous security robots already exist,38 we are not yet within the era of Autonomous Policing. We might witness the partial or even full automation of many instruments, including perhaps vehicles or other “things” that shape our lives. But these developments have not yet been directly translated into the American criminal realm—not fully at least. Automation is still relatively in its infancy, as exemplified by the promised driverless or autonomous car, which currently seems far from reaching its goals or fulfilling its potential.39

Still, there are vast differences in policing within the fourth era. Policing in 2021 is different from policing in 1990. Policing, for one, has expanded far beyond the kinetic world, as cyberspace has become a playing field for criminals to act in.40 But aside from such shifts, which necessitated institutional changes in police practices, new technological advancements that are intertwined with biometric analysis are becoming valuable new tools in policing. As Section I.B further shows, somewhere within the curve of Digital Policing, the combination of biometrics, the widespread availability of data, and advancements in AI is already beginning to reshape criminal law enforcement.

B. Biometrics, Recognition Technology, and Criminal Enforcement

Identifying suspects and locating witnesses to crimes are integral to criminal enforcement. Criminals are likely to flee the crime scene, often leaving law enforcement agents with few or no clues as to their identity. Identification efforts could be directed, inter alia, toward gathering forensic evidence, like that of deoxyribonucleic acid (DNA)41 or fingerprints.42 Other efforts will likely be directed toward the questioning of victims or eyewitnesses, if any exist and are able to assist. These victims or eyewitnesses might be able to, inter alia, describe the culprit, sometimes to a composite sketch artist, allowing the police to publish the sketch or otherwise use it to aid in solving the crime.43 And when a culprit is to be visually picked out of a police lineup, enforcement agents might rely not merely on people’s memories and identification of faces but also on identification of other biometric features, like voices or even body gestures.44

Biometrics entered criminal investigations prior to digital technology. Even within the Political Era of policing, and as early as 1888, the police used various methods, such as anthropological classification, to identify culprits.45 Fingerprinting, now an indispensable part of policing, was used in American criminal investigations as early as the early twentieth century.46 Almost three-quarters of a century later, computers entered the world of biometric identification, and in 1975, the FBI began using fingerprint readers.47 Realizing the potential of computerized biometrics, the FBI’s Criminal Justice Information Services Division initiated the Integrated Automated Fingerprint Identification System (IAFIS) in 1999, which provided, inter alia, digital tenprint and latent fingerprint searches.48 These, along with other forensic and investigative techniques, began to assume integral roles within the criminal justice system.49

Perhaps the biggest leap in biometric identification, the one that is at the heart of this Article, is that of recognition technology. Biometric features of people’s faces, voices, or bodily gestures (often gait), among other biometric features, can now be analyzed not simply by humans but by computers to match them with potential culprits.50 Such practices of verification or identification51 can even include searching for non-biometric features linked to suspects, such as tattoos, clothing, shoes, or any other identifying characteristics relevant for investigation. The imperfect memory of humans can now be replaced with the allegedly exact—and almost endless—memory and capabilities of computers52 for detecting and analyzing both physiological and behavioral characteristics.

Recognition technology can apply to many bodily and non-bodily features. One current dominant technology is that of facial recognition, focusing on various facial features, like eyes or nose, measuring the distance between them; using various datapoints within one’s face, such as skin, shadows, or other attributes; or matching faces as a whole.53 Other notable forms of recognition technology consist of iris recognition,54 voice recognition,55 and gesture recognition.56 Recognition technology can also extend beyond classical biometric identification to include the use of physiometrics, like the measurement of heart rate or blood pressure; the use of anthropometrics;57 and the recognition of tattoos.58 To use recognition technology for purposes of identification, one must obtain a sample of the target’s recognizable feature, a database that contains identification features of the population, and an algorithm that produces potential matches between the created template of the target’s recognizable feature and the stored identification features present within the database.59
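In software terms, this pipeline reduces to comparing a probe template against every template in a gallery and returning the highest-scoring candidates. The following toy sketch illustrates the idea; it assumes templates are fixed-length feature vectors produced by some upstream model, and every name, dimension, and threshold in it is an illustrative assumption rather than a description of any deployed system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gallery: 128-dimensional templates for 1,000 enrolled people.
# Real systems would derive such templates from images with a trained model.
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two templates."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, threshold: float = 0.6, top_k: int = 5):
    """One-to-many search: return up to top_k enrolled identities whose
    similarity to the probe clears the threshold -- candidate leads,
    not positive identifications."""
    scored = [(name, cosine(probe, t)) for name, t in gallery.items()]
    matches = [(n, s) for n, s in scored if s >= threshold]
    return sorted(matches, key=lambda p: p[1], reverse=True)[:top_k]

# Simulate a probe of an enrolled person: their template plus noise
# standing in for pose, lighting, and image-quality variation.
probe = gallery["person_42"] + rng.normal(scale=0.3, size=128)
print(identify(probe))  # "person_42" should top the candidate list
```

The threshold is the central policy lever in such a system: lowering it surfaces more candidates (and more false positives), while raising it returns fewer leads but misses more true matches.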

The promises of recognition technology for identification and verification, perhaps most dominantly these days those of facial recognition, are already being realized in many areas of our lives. The military and intelligence agencies reportedly use facial recognition tools to identify possible terrorist suspects, especially overseas.60 The Transportation Security Administration (TSA) uses it in airports as a more efficient means of checking and verifying travel documents, e.g., for detecting overstayed visas,61 or simply for passenger identification purposes.62 Immigration and Customs Enforcement (ICE) agents use it to target undocumented immigrants.63 And most likely, its primary use in terms of quantity comes from the market, sometimes within surveillance capitalism,64 but also within its efforts to increase security65 or to make products and services more convenient and enjoyable for consumers.66

Within the realm of criminal enforcement, it is hardly surprising that recognition technology is already deployed in totalitarian regimes.67 But it is not merely reserved for these regimes.68 Facial recognition technology is known to have been used by many enforcement agencies worldwide at least since the beginning of the twenty-first century.69 Such use is also becoming more integral in the international criminal domain, in which Interpol operates and maintains a facial recognition system (known as IFRS) that contains facial images from most parts of the world.70 And America is no exception. During the 2001 Super Bowl in Tampa, Florida, the police admittedly used facial recognition tools to locate subjects of outstanding warrants, in perhaps the first reported event in America.71 At roughly the same time, police departments across America were reported to have begun using facial recognition technology.72 Officially, the New York Police Department (NYPD) has reported using facial recognition since 2011,73 while the Detroit Police Department has been using it since at least 2017.74

On the federal level, the United States maintains and operates various biometric identification systems and programs.75 In 2011, the FBI began using the Next Generation Identification (NGI) system, which replaced the previously mentioned fingerprint system (IAFIS).76 NGI includes a facial recognition search in which an authorized law enforcement official can submit a “probe” photo to be matched against mostly federally generated images, like mugshot repositories (accompanying criminal tenprint fingerprints and a criminal history record).77 The FBI also operates a facial recognition unit named Facial Analysis, Comparison, and Evaluation Services, or simply FACE, which can compare a facial image (probe photo) to a database comprising driver’s licenses and other ID photos,78 along with other state photo repositories, such as criminal mugshots or corrections photos, obtained from databases of several U.S. states.79

Enforcement agencies have become eager to use recognition technology and, perhaps more commonly for now, facial recognition.80 To run a search, beyond obtaining the suspect’s identifier and the technological tool that enables identification, there must be a database within which to search for that identifier. The FBI’s NGI system contains an electronic repository of biometrics along with criminal history information.81 Similar systems can also be found on the state level.82 And by 2016, police facial recognition databases were reported to hold the photos of roughly half of the adult population in America,83 while various reports indicate that enforcement agencies are continuously building enormous databases with millions of individuals’ photos.84 Enforcement agencies often share data between various state and federal departments and agencies, and such sharing might eventually increase the already large datasets of biometric data accessible to police officers.85 These internal biometric databases, the legality of which will be further discussed in Part III, might thus be composed of various sources, most notably arrest photos and other images from criminal-related activities.86

Within their efforts to improve their abilities, law enforcement agencies also turned to the private market’s power. Already heavily invested in the technology, the market was ready to provide identification tools for anyone to use, including law enforcement agencies.87 Some market players directly targeted law enforcement agencies as customers, offering them biometric or biological identification tools and services. Amazon, for instance, pushed its face identification system (Rekognition) to enforcement agencies,88 emphasizing that it “could aid criminal investigations by recognizing suspects in photos and videos.”89 And such private technology is not a rare exception. Many federal and state law enforcement officers reportedly used private facial recognition apps or services in their efforts to identify suspects.90

The market soon expanded to include not only the technology but also the database. Companies like Clearview AI began offering enforcement agencies services of comparing probe photos submitted by the police—not only against limited state-owned databases but also against billions of images scraped from Facebook, YouTube, Venmo, and other online sources.91 Until recently at least, Clearview AI worked closely with thousands of police agencies across the United States alone,92 while other foreign companies in this field had also been reported to work closely with American law enforcement agencies.93

While outside this Article’s main focus, what already came into play in some areas, and could likely expand without regulatory barriers, is the police use of recognition technology in real time. If the public sphere becomes awash with sensors, then it might also be intertwined with real-time recognition technology, e.g., identifying the faces or other features of those in the public sphere.94 Such live facial recognition is already used in China and by enforcement agents in London, United Kingdom.95 While it is unclear how many American jurisdictions have incorporated live facial recognition in public places,96 Detroit is reported to use such recognition technology in conjunction with its $8 million Project Green Light that includes more than seven hundred high-definition cameras scattered across the city.97 This Article, however, focuses mainly on proactive recognition rather than real-time recognition. And while a glimpse into the near future suggests that recognition technology could be much broader,98 facial recognition is the most dominant for now.

Surely, the use of biometric identification might always sound troubling, but when it comes to criminal enforcement, it could carry dire consequences for people’s rights and liberties. And as Part II shows, the use of recognition technology, especially facial recognition, dramatically affects some cohorts more than others. The next Part thus discusses the threats that recognition technology raises in the context of criminal enforcement—a technology that has been systematically proven to be flawed and, more specifically, biased toward misidentifying some cohorts, most notably, in the United States, Black people.

II. Racial Recognition Threats

Due to various legal instruments that prohibit or limit the use of recognition technology, limited governmental funding for using these technologies, or other market-related reasons, recognition technology is not yet fully implemented within the American criminal system. But where it is in use, and with a glimpse toward the near future, it has many negative consequences for the rights and liberties of individuals and, specifically in the context of this Article, is a tool that might be used disproportionately to target specific cohorts, most profoundly Black people, who have historically been treated differently by enforcement agencies.99 The use of recognition technology by enforcement agencies thus raises fears of mistreatment and of increasing social control over marginalized communities.

This Part introduces the rise of what this Article calls racial recognition—that is, how the combination of law enforcement and recognition technology is likely tainted with racism. This term broadly incorporates the use of race, ethnicity, or national origin as a factor used by enforcement agents within any police practice relating to recognition technology. To do so, Section II.A scrutinizes the general racial aspects of recognition technology as they currently unfold, while Section II.B focuses on the realm of criminal law to show how the combination of recognition technology and racism becomes especially troubling within criminal law enforcement.

A. Bias and Racism Within Recognition Technology

Technology by itself is neither inherently biased nor racist. But it is hardly neutral either.100 Algorithms and software are still mainly constructed and programmed by humans. Under the AI branch of machine learning,101 computational outcomes depend on their algorithms, the data that they are trained on, and the data fed into the system. This is where potential bias enters first: data and algorithms often reflect choices about connections, inferences, and interpretations.102 If the coder, the training dataset, or the data used contains some forms of explicit or implicit bias, then the output of the software will likely replicate or amplify this bias.103 Bias in often means bias out.104

As machines are not inherently biased, it is humans that form such bias. Unfortunately, humans are probably the most biased organisms on this planet.105 Computer bias begins with the architecture of systems. When coding, humans are likely to reflect their own priorities, preferences, and prejudices.106 Adding to this conundrum, algorithms are often written by homogeneous developers.107 Namely, American white men, benevolent and without explicit bias as they may be, will often be influenced by cognitive shortcomings, like a tendency to recognize faces more easily within their racial group, also known as the cross-race effect.108 Simply put, algorithms and codes might reflect both explicit and implicit biases and cognitive failures stemming from their developers, often linked to their own race or ethnicity. Human bias in recognition technology continues with the datasets used to train the system. AI-based algorithms learn from data and compute results accordingly. If the dataset is already tainted with bias or otherwise fails to accurately represent cohorts in society, then the algorithm will likely produce inaccurate or biased outcomes.109

Despite its promises, inaccuracy is still a major problem in recognition technology.110 The American Civil Liberties Union (ACLU), for instance, proved that Amazon’s Rekognition was so inaccurate that comparing twenty-five thousand public mugshots to Congressmembers resulted in falsely identifying twenty-eight of them as having previously been arrested by the police.111 The London Metropolitan Police’s automated facial recognition system was proven to be wrong most of the time.112 Adding to this problem, while some of these findings were obtained in a controlled environment in which the photos used were clear and well lit, poor-quality photos, which might be prevalent in criminal investigations, were proven to increase inaccuracies.113

And the problem is not merely inaccuracy. Studies continuously prove that facial recognition discriminates based on classes like age, race, and gender.114 And while bias could exist for various cohorts, facial recognition software has been proven to be systemically biased against those with darker skin, providing more false positives than for others.115 Such bias forms partially due to lack of diversity,116 as datasets have an over-representation of white men.117 When white men’s data comes in, the computation of white men’s data comes out.118 Another reason is that Black people’s images in databases are often of lower quality than those of white people.119 To put things more simply, facial recognition was proven prone to make more misidentifications—false positives—when applied to Black people.120
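The disparity in false positives described above can be made concrete with a short audit sketch. The numbers below are simulated assumptions, not measurements of any real system: they simply illustrate how a single global match threshold yields unequal false-match rates when impostor score distributions differ across cohorts, as the studies cited above found.

```python
import numpy as np

rng = np.random.default_rng(1)

def false_match_rate(impostor_scores: np.ndarray, threshold: float) -> float:
    """Share of impostor comparisons (photos of *different* people) whose
    similarity score nonetheless clears the threshold -- false positives."""
    return float(np.mean(impostor_scores >= threshold))

# Simulated impostor-score distributions. If a model is trained mostly on
# one cohort, impostor scores for other cohorts tend to sit higher, because
# faces the model represents poorly look more alike to it.
impostor_scores = {
    "cohort_A": rng.normal(loc=0.30, scale=0.10, size=100_000),
    "cohort_B": rng.normal(loc=0.42, scale=0.10, size=100_000),
}

THRESHOLD = 0.60  # one global threshold, as many deployments use
for cohort, scores in impostor_scores.items():
    print(cohort, f"{false_match_rate(scores, THRESHOLD):.4f}")
# Under these assumed distributions, cohort_B's false-match rate comes out
# over an order of magnitude higher than cohort_A's at the same threshold.
```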

And it is not just false positives but also systematic biases embedded within some of these systems, often caused by developers. For example, some early versions of face-tracking web cameras by Hewlett-Packard were found not to detect Black people.121 Google’s facial recognition software was notoriously known for categorizing two Black men as gorillas.122 Twitter’s neural network was proven racially biased when prioritizing those with lighter skin within cropped preview timelines.123 This bias applied not only to humans but also to cartoon characters and even dogs, preferring the light-colored over the dark-furred.124

Racial bias exists in other recognition technology as well. Researchers recently demonstrated similar misidentifications with voice recognition technology,125 which better understands white males than others.126 Gesture recognition technology—identifying bodily movements—was also found to better compute the gestures of men than those of women and children, primarily because these systems were trained on men aged eighteen to thirty-five.127

While there is much promise in these technologies, they do not yet live up to their promises.128 Aside from general accuracy and misidentification problems, the use of recognition technology might perpetuate racial bias that already exists in the real world. Applying this fear to the realm of criminal enforcement, in which racial bias and misuse are inherent within the American system, will make enforcement even more flawed and increase means of social control over marginalized communities. To better understand the optimal tradeoff between law enforcement needs and the implications of the error-prone recognition technology, Section II.B turns to discuss why recognition technology is highly risky within the context of law enforcement.

B. Racial Recognition Within Criminal Enforcement

The perpetuation and potential enhancement of racial bias becomes more evident within the use of recognition technology. Whether programmed with inherent biases or trained and fed with biased data, recognition technology is troubling not only in its commercial aspects but more so in the context of criminal law—the most coercive and liberty-limiting instrument of the law.129 If we rely on biased systems in the criminal justice system, then crucial decisions on who should be suspected, arrested, indicted, incarcerated, or paroled will be discriminatory.

The discussion on using recognition technology in criminal enforcement thus begins with misidentification in general—creating false positives—meaning that the algorithm matching the suspect’s image, voice, or gesture with a dataset produces incorrect outcomes. While these mistakes might not sound dramatic in commercial applications, say if a browser or social media account misidentifies a turtle as a rifle,130 it becomes highly troubling when someone is arrested simply because technology misidentified him or her.131

The fear of misidentification increases when error rates are higher for some cohorts, like Black people. To be fair, as the National Institute of Standards and Technology (NIST) demonstrated, recognition technology produces higher error rates not only for Black people in comparison to white people but also for Asians and Native Americans, for women more than men, and for older adults more than middle-aged ones.132 While issues like gender bias are no less important to research and fight against,133 racial errors have graver implications in the context of criminal law, as Black people tend to suffer more from biased enforcement, thus forming the focus of this Article.

As established, racial recognition begins with homogeneous developers (mostly white men). It continues with racial bias from flawed datasets. Here, the source of the training data will highly impact the inherent bias of the system.134 The system will likely be fed with training data that might cause systematic misidentification of Black individuals more than of white individuals. Next comes the dataset used to identify the suspect. Here, much depends on whether the police are using their own internal databases or an external database, often scraped from the internet. If the police use their own datasets, then some cohorts, like Black men, might be at a severe disadvantage, as these datasets are often compiled from arrest records and police-generated images, like mugshots, that initially contain higher rates of Black people compared to the rest of the general population,135 thus making them more susceptible to the use of these biometric systems, and thereby, also more prone to mistakes in identification.136 Thus, Black people are more likely than others to be in the compared dataset, meaning that they have higher chances of being identified from this database.137 Using the internet as a source for the dataset will have to be examined specifically in light of potential biases within it. In that instance, the internet might be advantageous for identification purposes (unlike for data-training purposes) for Black people, as they are generally less represented online.138
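Back-of-the-envelope arithmetic shows how these two factors compound: over-representation in a police-generated gallery multiplies with a higher per-comparison error rate. Every figure below is a hypothetical assumption chosen for illustration, not a measurement of any actual population, database, or algorithm.

```python
# Hypothetical inputs (illustrative only): each group's share of the
# general population, its share of a mugshot-heavy gallery, and its
# assumed per-comparison false-match rate.
population_share = {"group_A": 0.60, "group_B": 0.13}
gallery_share = {"group_A": 0.45, "group_B": 0.30}
false_match_rate = {"group_A": 0.0001, "group_B": 0.0005}

GALLERY_SIZE = 1_000_000  # total enrolled templates

for group in population_share:
    enrolled = gallery_share[group] * GALLERY_SIZE
    # Expected number of innocent enrollees from this group that a single
    # probe search would falsely surface as candidates.
    expected_false_candidates = enrolled * false_match_rate[group]
    print(group, round(expected_false_candidates))
# group_A: ~45 false candidates per search; group_B: ~150 -- despite
# group_B being a far smaller share of the general population.
```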

Adding to this alarming list of biases are institutional and individual police racism. Criminal enforcement is often prone to target communities of color, and along with various unlawful misconduct,139 racial disparities in policing have been statistically proven in many police practices.140 Race and ethnicity, for that matter, play an unfortunate and historic role within the criminal justice system, as many researchers have proven.141 Predictive policing, the application of analytical techniques to predict crimes and identify targets, was also found to be used disproportionally against historically over-policed communities.142 Simply put, when it comes to policing, some groups, most notably Black people, receive dissimilar treatment.143

Such dissimilar treatment is another factor that must be considered within the notion of racial recognition on both institutional and individual levels. The institutional level mainly represents how racism on the individual level was historically translated into police practices that generally mistreat minorities and communities of color.144 For example, such institutional bias could stem from general police practices or guidelines affected by individual bias. Specifically in the context of recognition technology, institutional decisions on where to place cameras (to obtain probe images of suspects) or where police officers equipped with BWCs patrol (regardless of individual bias) could highly impact what data enters the system to begin with.

From an individual perspective, the fact that, statistically, there are other biased humans in the system—police officers—will increase the likelihood of racism. While surely not every police officer is racist, the history of racial policing raises a substantial fear. This fear not only represents another factor that could increase chances of racism and dissimilar treatment between cohorts but also implies that the use of recognition technology within the realm of criminal enforcement is risky regardless of whether the technology yields inaccurate or biased results.

In other words, accuracy is only one problem within racial recognition. Even if the algorithm learns how to “de-bias” its results, it does not “de-bias” humans. The combination is alarming, as a biased system combined with a racist law enforcement agent or institutional racism might not only perpetuate racial bias and increase social inequalities but might even become a tool for legitimizing legal action against some people—all within what might appear as a justified and legitimate cause of action.

These technologies might thus be misused to violate human rights and, most notably, target minorities and communities of color.145 Consider the broad implications of misusing a technology that could specifically target individuals based on their age, gender, and skin tone.146 China was reported to use facial recognition technology for racial profiling against the Uighur Muslim minority,147 while reporters indicate that the private sector is developing technology that would enable the government to receive alerts when individuals from this group are detected.148 A substantial fear is that enforcement agencies will digitally aim their efforts at minority neighborhoods, equipping them with more cameras to be used and misused.149

And this misuse could extend far beyond the realm of criminal law. The use of recognition technology in the public sphere might impact many constitutional or basic human rights, such as privacy, free speech, free association, free movement, and due process.150 Abusing the power afforded by this technology could potentially lead to identifying public protesters or people at rallies in general,151 which are probably at unprecedented levels these days.

Thus, racial recognition affects much more than selective enforcement against Black people. It might be misused as a powerful tool for social control over anyone, but most likely marginalized communities and perhaps those supporting them in pursuing their causes. Think of protesters who are targeted by these technologies.152 This becomes especially frightening when the protests themselves are responses to police racism, such as those that arose from the killing of George Floyd in 2020.153 Not surprisingly then, minorities are less trusting than the general public of police use of facial recognition technology.154

One of the fears of using this technology under the umbrella of public safety is that of becoming a surveillance state. Suppose, arguendo, that the police use recognition software to obtain full surveillance of individuals.155 In demonstrations of its Rekognition software, Amazon showed how the input photograph could come from a body camera worn by a police officer.156 The state might thus use an array of available technological tools like BWCs,157 which were adopted in part to reduce discrimination,158 or drones equipped with technology like portable spying devices.159

And this is where undocumented immigrants or Black activists might again be at a severe disadvantage compared to the general population.160 In practice, both Miami police and the NYPD used facial recognition to track down Black Lives Matter activists.161 Baltimore police were reported to use facial recognition to identify protesters by linking their images to their social media profiles.162 In a weekly public report, the Detroit Police Department admitted using its facial recognition software against Black people in ninety-seven percent of its facial recognition requests.163

Notably, the fear of racist algorithms in criminal law enforcement was raised in the context of risk assessments, well before the advent of recognition technology.164 The proprietary risk assessment AI system known as COMPAS, a tool designed to assess recidivism that is often used in sentencing by some courts, a use affirmed by the Wisconsin Supreme Court,165 was proven to yield biased results against Black people,166 among other problems.167

In the context of recognition technology, while enforcement agencies including the NYPD claim there were no false arrests due to facial recognition misidentification,168 there is already proof to the contrary in other departments. In June 2020, a Black male was falsely arrested in Detroit due to facial recognition.169 Only after spending thirty hours in custody was the suspect released on bail, and the charges were eventually dropped.170 Only a few days later, another Black man was misidentified.171

Thus, the use of recognition technology, primarily facial recognition, could lead to discrimination against some people and, perhaps most profoundly, Black people. When racial disparities already exist in policing, and when technology and data are tainted with racism, crucial decisions become even more discriminatory than within traditional policing. Even if we eliminate any embedded biases within the system—something currently highly improbable—there are still checks and balances to be set when using this technology to ensure it is not misused against some groups or individuals.172 In other words, algorithmic recognition technologies used by law enforcement might become racial profiling tools. They might operate to identify race, not face, thereby improperly correlating race to criminality, and they might replicate and exacerbate systemic inequalities and discrimination against individuals on the basis of race. Part III focuses on the current legal regime that governs racial recognition and then turns to discuss how to properly regulate it.

III. Regulating Recognition Technology in Law Enforcement

Police use of recognition technology raises a host of legal questions, with emphasis on its negative impact on human rights and liberties. While recognition technology could aid police officers in performing their legal mandates, the risks of errors and misuse might halt the use of recognition technology for criminal enforcement purposes, at least until its perceived shortcomings can be substantially reduced. But as the future, in this respect, is just around the corner, the use of recognition technology for identification necessitates rethinking the permissible framework that law enforcement must act within, including the problems that stem from the use of this technology, even if it becomes “neutral” or otherwise unbiased per se.

A. The Legality of Recognition Technology Within Criminal Enforcement

There are many benefits of data-driven police practices in general and of recognition technology specifically.173 If it functions properly, recognition technology could aid law enforcement agents in identifying suspects and subsequently in the investigation and prosecution of crimes.174 It could also aid the police in identifying suspects of crimes when victims are unable to do so, e.g., when the victims are Alzheimer’s patients.175 Recognition technology could even aid in rescuing human trafficking victims, finding missing children, or aiding disoriented individuals.176 Recognition technology could also aid in reducing concerns about bias, as these systems often perform automated tasks based on numeric analysis of features and patterns, regardless of race.177 In a utopian future, recognition technology could aid in increasing public safety while potentially reducing the effects of inherent human biases and prejudices in enforcement.

To some extent, it might sound farfetched that the police would be denied the possibility of identifying suspects or finding missing individuals.178 But while barring the police from using any technology that would aid them in identifying suspects might strike some as unnecessarily raising barriers to proper police work, the drawbacks of racial recognition, along with the other human rights and liberties that might be violated, are too significant. To understand what regulatory gaps need to be filled in the legal regime that governs racial recognition, the following Section discusses constitutional protection on the one hand and federal and state laws on the other.

1. Constitutional Aspects

The legal analysis of racial recognition begins with the Constitution or, more precisely, with some of its amendments. The use of recognition technology in criminal enforcement raises constitutional concerns around the rights granted under the First, Fourth, Fifth, and Fourteenth Amendments.179 Unfortunately, as further discussed, these amendments are currently highly limited—and mostly inapplicable—in regulating racial recognition.

The first candidate for governing racial recognition is the First Amendment. Depending on how recognition technology is deployed, one concern regarding law enforcement’s use of recognition technology for purposes of identification is that it will create a chilling effect on freedom of speech and association as protected by the First Amendment.180 When the police use such technology, it might infringe upon individuals’ First Amendment rights to anonymous speech,181 as individuals might refrain from engaging in expressive action or association out of fear of being associated and identified within a specific context—often political or religious.182

As this Article focuses on identification, when it comes to merely identifying individuals with recognition technology, it is not clear if the First Amendment could be invoked. Merely identifying individuals is not a First Amendment violation per se, and even surveillance of speech might not violate the First Amendment.183 Moreover, even if the use of this technology leads to retaliatory arrests, as one might fear within the context of racial recognition, it would be highly difficult for plaintiffs to prevail on a First Amendment claim, as the existence of probable cause would defeat such a claim.184 Thus, absent the use of live facial recognition to locate individuals in the public sphere, the First Amendment would be rather limited in regulating racial recognition in the context of identification.

The second candidate is the Fourth Amendment, which protects against unreasonable searches and seizures—often a primary tool to regulate police conduct.185 As the Supreme Court ruled in Katz v. United States, violating a reasonable expectation of privacy constitutes a search, which then requires a warrant,186 unless an exception exists.187 The reasonable expectation of privacy test is both subjective, i.e., whether an actual expectation of privacy exists, and objective, i.e., whether society recognizes the expectation as “reasonable.”188

Subsequent case law has clarified when a search is considered reasonable. After Katz, the Supreme Court began widening the scope of the Fourth Amendment to digital technology.189 In the context of criminal law, police officers are allowed to stop suspects and frisk them when they have a “reasonable suspicion” (a lower standard than probable cause) that the person has committed, is committing, or is about to commit a crime.190 When making decisions, police officers must “point to specific and articulable facts which, taken together with rational inferences from those facts, reasonably warrant that intrusion.”191 They must have an individualized suspicion; otherwise, there is no probable cause for the search.192 Police officers can also make mistakes in identification, especially if the decision is “an objectively ‘reasonable good-faith belief’ that their conduct is lawful” or when it is an isolated event of negligence.193 They cannot, however, make intentional or systemic errors.194

When not taking technology into account, recognizing one’s face, voice, or gestures does not constitute a Fourth Amendment search.195 When no physical invasion occurs, inspection by the naked eye is generally permissible.196 In addition, subjective racial discrimination in conducting a search has been held irrelevant to the Fourth Amendment, especially when the search was otherwise lawful.197 Generally speaking, race is considered outside the scope of the Fourth Amendment.198

Still, the main question within the analysis is that of privacy expectations in the context of identification when taking technology into account. This analysis begins with the probe photo of the suspect as technologically captured.199 Even lacking suspicion, the mere capturing of such information is reasonable when obtained from a public place, which has traditionally afforded fewer privacy protections than the private sphere.200 While the Supreme Court famously noted that the Fourth Amendment “protects people [and] not places,”201 there is little expectation of privacy when individuals go out to a public place to begin with,202 that is, unless they make efforts to shield such information from public view.203 It is generally also legal for anyone, including law enforcement agents, to photograph people in public.204 Perhaps the smartification of the public sphere will lead to different conclusions in the future; as articulated by Chief Justice John Roberts, “[a] person does not surrender all Fourth Amendment protection by venturing into the public sphere.”205 Currently, we generally surrender our faces, voices, and other biometric features when going out.

Second is the question of searching within databases. If the police use their own databases, whether composed of mugshots, driver’s license photos, passport photos, or otherwise legally obtained photos from investigations, such use is unlikely to constitute a Fourth Amendment violation.206 Unless otherwise barred by state law, law enforcement agents can legally use Department of Motor Vehicles photos.207 If the police legally obtain publicly available online information, as they often do, then searching within such a database is generally permissible.208

Other databases might be privately owned. Under current Supreme Court jurisprudence, there is no Fourth Amendment violation when the police access data that individuals voluntarily shared with third parties. Under this so-called third-party doctrine, there is no reasonable expectation of privacy in these images, which in turn are exempt from the protections afforded by the Fourth Amendment.209 This analysis might change if the algorithm produces more than mere identification, i.e., reveals metadata and information about suspects that is embedded within the media (such as location details or other contextual information about them).210 Here, the public might be unaware of such metadata and might not expect that it would be divulged to third parties. In the case of mere identification, however, this becomes less relevant.

Finally comes the algorithm used by the police. As these technologies are often offered by the market, a valid Fourth Amendment claim might arise if such “sense-enhancing” technology is not provided to the public.211 On the one hand, the public can use various recognition services.212 One such example is PimEyes, a website that grants anyone the opportunity to upload a probe photo and search within its own database that is composed of people’s faces scraped from the internet.213 Truly, if functioning properly, websites like PimEyes pose difficult Fourth Amendment questions. If they are widely available, barring the police from using such websites seems not only impractical in the sense that it would be highly difficult to prove such use, but also unreasonable, as such use would be expected by the public. It would be difficult to argue that such a search is unreasonable, considering the availability of the service. Still, websites like PimEyes must be compared to the use of more sophisticated technologies to examine if the public has access to similar tools.

A final query concerns partnerships with companies like Clearview AI. While the third-party doctrine would invalidate a Fourth Amendment claim against such a practice, the difference here is that Clearview AI scraped the internet for the photos, likely in violation of the terms of service of such websites. In other words, while users likely consented to some third-party use of these photos, they did so under those terms of service. Still, it seems that, here as well, it would be the role of other laws, like contract law, to regulate such misuse of biometrics. Overall, the Fourth Amendment is highly limited in regulating racial recognition.214

The next candidate for constitutional protection could be equal protection rights granted under either the Due Process Clause of the Fifth Amendment215 or, more likely, under the Equal Protection Clause of the Fourteenth Amendment.216 The Fourteenth Amendment grants equal protection under the law, and it might be violated in the context of racial recognition when the technology is biased and used in a discriminatory way or when it is incorporated into decisions without an individual’s knowledge that it is being used.217 But suing might be challenging, as plaintiffs will need to prove both discriminatory effect and discriminatory purpose.218

Discriminatory effect is allegedly easier to prove in this context. As this Article argues, statistical or otherwise aggregated data on the technology’s accuracy rates—and, more specifically, on its higher error rates for some minorities—could aid in proving discriminatory effect.219 But discriminatory purpose is a different story and is highly difficult to prove.220 Plaintiffs will have to prove that the decision against them, or their cohort, was intentional and personal.221 Institutional racism or any racism embedded within the technology or the dataset is insufficient for discriminatory purpose, as it does not directly target individuals. Statistical discrimination might not be sufficient either.222 Even awareness by the enforcement agent of the consequences of using a provenly biased technology will not suffice to prove discriminatory purpose.223

To prove discriminatory purpose, plaintiffs will have to focus on at least one of the two human decisions in the process—whom to use this technology on and what to make of the outcome. That would be highly difficult to prove, as a racist police officer will not likely reveal that his decision was based on racism, masking it instead behind other rationales for invoking the use of the recognition technology.224

The second hurdle is that of the human mind. Many people might act in discriminatory ways without knowing so, due to cognitive shortcomings and biases.225 While some might act on what they think is a reasonable good-faith belief, unconsciously it might not be.226 For example, people tend to categorize and stereotype to make quick (and efficient) decisions—including racial stereotypes.227 Moreover, even unbiased police officers might suffer from automation bias, over-relying on the (statistically flawed) outcomes of such searches.228 Overall, whether consciously or not, it would be highly difficult to prove discriminatory purpose.229 Adding to this conundrum, it will be challenging for individuals to seek civil damages for an alleged constitutional rights violation given the qualified immunity doctrine, especially when the claim relies on human error.230 Thus, equal protection as a constitutional remedy currently offers little protection against misuse by officers or the algorithm, thus failing to protect against racial recognition.

The Constitution is thus limited in governing racial recognition.231 Perhaps it is not even within its mandate, as police work is often largely governed by statutes that regulate specific technology-related aspects of policing, like wiretapping or access to stored wire and electronic communications,232 or by internal rules and guidelines.233

2. Laws and Regulations

While several congressional initiatives were proposed in the past and some are still ongoing, federal law currently lacks direct rules or regulations for the use of recognition technology.234 States and several municipalities recently became more active in this sense. While not necessarily applicable to all recognition technology, some states regulated the use of facial recognition technology by government entities;235 some cities banned facial recognition within specific technologies, like BWCs;236 and others banned governmental use of face surveillance more broadly or even the use of facial recognition technology by non-governmental entities.237 It remains to be seen if other states will follow this line, but notably, these laws are directed mostly at facial recognition, leaving other potential recognition technology aside for now. Still, directly banning the use of recognition technology by enforcement agencies becomes the most efficient way to tackle its threats, even if only temporarily, as Section III.B will argue.

Other than specifically targeting facial recognition, a few potential federal and state laws might regulate racial recognition as well. Prime candidates are antidiscrimination statutes, which exist in many fields, e.g., housing and employment laws restricting the use of factors like race, gender, disability, or age within decision-making.238 On the federal level, the Civil Rights Act of 1964, which applies to police departments receiving federal funds,239 prohibits discrimination based on race, color, religion, national origin, or sex.240 Another candidate is the Safe Streets Act, which prohibits the police from acting with a racially disparate impact.241 There are also some antidiscrimination laws on the state level that might regulate racially disparate impact within police work.242 Unfortunately, antidiscrimination laws are rarely used to create any systematic change within already discriminatory and racist police practices.243 While some argue that disparate impact laws must apply to criminal enforcement,244 the disparate impact doctrine is not yet considered part of criminal enforcement.245

In addition, these laws regulate only parts of the bias problem and do not address the technological black box of the process—that is, the regulation of algorithms and training data. Another relevant body of regulation thus pertains to databases and, even more directly, to those containing biometric data. The collection of biometric data from foreign nationals in airports, when they depart from or enter the country, was legalized over time,246 and the state is also not directly prohibited from collecting and storing biometric information.247 Unauthorized collection, use, and disclosure of information (including biometric data) could fall under the Privacy Act of 1974,248 or under other privacy-related state-enacted laws or constitutions.249

The 1974 Act has many limits in regulating recognition technology. First, it applies only to federal entities, exempting state and local governments along with private entities.250 Even among federal entities, the FBI has issued a final rule exempting its NGI biometric database from the Act.251 Second, the 1974 Act applies only to American citizens, excluding companies, non-resident aliens, and other foreigners.252 Finally, it contains many exemptions that might cover recognition data.253

The E-Government Act of 2002 is another potential candidate within this realm, requiring federal agencies to conduct Privacy Impact Assessments (PIAs) when running programs or information technology systems that collect, maintain, or disseminate personal information.254 These PIAs should aid the agencies in evaluating the privacy risks to individuals, while offering potential protections to them, and they should be updated when new privacy risks arise.255 The problem, however, is that these PIAs are not updated as frequently as they should be, and the public remains largely unaware of the use of recognition technology by enforcement agencies.256 While important, such assessments lack regulatory teeth and can only lightly regulate racial recognition.

The relevant datasets are also not exclusively state owned, as the police might use privately owned datasets, whether by requesting voluntary aid from companies or by using targeted recognition services offered by the market, e.g., Clearview AI. As for collection by private companies, while some federal laws might be somewhat applicable to biometric data,257 such data is mainly regulated on the state level.258 Companies will often adhere to privacy-related laws and regulations that might specifically relate to the use of biometric data and even facial recognition.259 The most notable example is Illinois's Biometric Information Privacy Act (BIPA), which requires private companies, inter alia, to inform and obtain written consent from those whose biometric identifiers are collected or stored,260 while also prohibiting profiting from such biometric data.261 Texas,262 Arkansas,263 and Washington264 also have specific biometric laws, and there are other state privacy laws that might regulate biometric data to some extent, the most comprehensive of which are California's.265

Aside from datasets, it remains to be seen how courts view partnerships between enforcement agencies and private companies that offer recognition services. Vermont filed a lawsuit against Clearview AI for, inter alia, unlawfully acquiring data from consumers and businesses, thus violating Vermont’s Data Broker Law.266 There are also class action lawsuits against Clearview in various states,267 and some states, including New Jersey, barred police officers from using its services.268

Current law is thus limited in properly regulating recognition technology in the context of law enforcement—that is, without resorting to a complete ban on its use, as some jurisdictions have begun to impose.269 But as Section III.B argues, while such moratoriums are important, they are also temporary solutions. Properly establishing the legality of recognition use requires a more holistic and realistic view of the benefits of this technology and, more specifically, directly targeting the main problems of racism embedded within the identification process. This Article discusses such solutions in Section III.B.

B. Slowing Down the Inevitable Future of Criminal Recognition

“Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”270

The deployment of AI will likely change and reshape many aspects of our lives, and criminal enforcement is unlikely to remain in the stone age of AI.271 Given the history of law enforcement's adoption and use of novel technological policing tools, police use of these new technologies is somewhat inevitable.272 The question is how. How should policymakers craft proper rules for the use of AI by law enforcement agents? As this field advances quickly, should democracies in general, and the United States specifically, place a moratorium on any use of recognition technology by enforcement agents, at least until proper barriers are in place? Can the State properly regulate identification by recognition technology, and, if so, how? This Section provides a conceptual blueprint for policymakers and discusses how the market and society might be part of the equation.

1. The Direct Roles of the Law in Regulating Racial Recognition

Much like some state or municipal initiatives, the most straightforward regulatory response to racial recognition would be placing a ban, or at least a moratorium, on the development or use of the technology. Many opine that this option is a just and feasible way to handle the potential drawbacks of recognition technology within the realm of the State.273 Selinger and Hartzog took such an approach further, proposing a ban on facial recognition in the private sector as well.274 And as this Article has argued, there are many good reasons to do so, as the potential harms of using recognition technology within the realm of law enforcement might currently outweigh its benefits.

But it is highly difficult, if not impossible, to stop an innovative technology once it is already deployed. Once the cat is out of the bag, there is little governments can pragmatically do to put it back in, especially given the significant economic value and other benefits that this technology provides. Still, it should not be all or nothing in this respect. Even without banning the technology, there are proper ways to use it without becoming a dystopian surveillance state.275 In such a state, there would be little escape from the data-driven technologies that mark the future of criminal enforcement, even before entering the perceived fifth era of Autonomous Policing.276

To clarify again, this Article focuses on the possible applications of recognition technology as a tool for identification—not as a surveillance tool. If the use of recognition technology constitutes, by any means, surveillance of individuals that extends beyond mere identification, then such use must be strictly banned by Congress before it becomes ubiquitous.277 Any China-like live surveillance or real-time tracking must never be permissible in any democratic state and should be deemed unconstitutional regardless of the racial aspects that this Article seeks to address, as it defeats the core purposes behind the rights afforded by the Constitution.278

While not without drawbacks, identification is a different story, and much like the use of other biometrics, it should be gradually allowed upon passing several regulatory steps and ensuring a proper legal framework to govern its use. Thus, when discussing identification, policymakers must tackle any risks of using this technology for social control and misuse against specific cohorts or marginalized communities or any misuse in general. Unfortunately, as Section III.A showed, the constitutional protections that might aid in reducing the risks of racial recognition are currently limited. And while the aspects of racial recognition should fall under any antidiscrimination provision on the constitutional level, it is unlikely that courts will agree given current jurisprudence, and it will be upon plaintiffs to prove a constitutional violation on an individual basis. This is not sufficient by any means. As argued by Selinger and Hartzog, we simply cannot wait for the Supreme Court to update its privacy protections in this regard, as by then, the use of such technology might become ubiquitous.279 We must not wait for new judicial interpretations of the Fourth Amendment in this context, or, in the words of Justice Alito in Riley v. California, “it would be very unfortunate if privacy protection in the 21st century were left primarily to the federal courts using the blunt instrument of the Fourth Amendment.”280

To address racial recognition, we must regulate the use of recognition technology through laws or other regulations that are adaptive to change. Here, too, it is crucial to regulate these technologies more broadly. To date, policymakers and academics have focused almost solely on facial recognition in their proposed laws and analyses, seldom mentioning their applicability to other recognition technology like voice or gesture recognition. While facial recognition is a more imminent threat than other recognition technology, this reality could swiftly change,281 and policymakers must adhere to a technologically neutral approach so as to accommodate how recognition technology—like that of voice or gesture—might be shaped and used in the future.

Regulating recognition technology necessitates three regulatory stages. The first stage is to broadly stop its current governmental use. Banning any governmental use of recognition technology is crucial, as continuing to use it without further studying its drawbacks, and how to mitigate the risks that stem from its use, might eventually normalize it. Congress must act without further delay to pass a moratorium on any use of such technology by any state department until further notice, while also ceasing funding for such use.282 If this technology has been used in any criminal proceedings, previous or ongoing, its outputs must not be considered reliable or admissible evidence.283

Upon banning the technology, Congress can proceed to the second stage—studying how to craft an exception for law enforcement for purposes of identification. This must be done on the federal level, operating under a single federal office,284 as city or statewide legislation and regulation are merely bandages.285 This is highly important, as a single regulatory body operating on the federal level is the most efficient way to avoid inconsistencies, thus creating standardized police practices in this field.286

Within and outside the federal office, Congress should continue its efforts to study the benefits and drawbacks of using this technology in the context of identification, while preparing to legislate a proper and effective regulatory framework to govern it. Within this tradeoff, it is highly important to study whether, and to what extent, the use of recognition technology is fruitful in aiding law enforcement agencies.287 Only upon proving that the benefits of using this technology outweigh its drawbacks, assuming for now that it yields accurate results, can Congress craft exceptions to the ban.

The third stage is the creation of such a regulatory framework. This framework should address the concerns raised in this Article, i.e., how to improve the accuracy of outputs and how to reduce biased or otherwise flawed decision-making within the process. It should apply to discrimination of any kind. But before directly addressing the problems of racial recognition, there are several regulatory steps that should accompany such regulations.

The first relates to biometric datasets in any sector. Generally, biometric data must be regulated on the federal level, at least until all states provide a minimal level of protection regulating such conduct by private entities. As these datasets are highly sensitive, it is crucial to deploy proper security measures.288 Databases could be hacked, and highly sensitive biometric data could then be compromised.289 Thus, while some states, including Illinois, have taken the path of regulating biometric data within the private sector, it is crucial that the United States follow other regimes, like the European Union, in protecting sensitive data more holistically within privacy regulations or at least within the regulation of law enforcement.290

Within the regulation of racial recognition, accuracy is a real problem that must be directly addressed. It begins with asymmetrical data gaps that must be filled.291 No one should be able to use recognition technology of any kind, for any reason, if the training data or datasets used lack sufficient diversity or statistically misrepresent cohorts of any kind.292 At a minimum, algorithms must show equal (and low) error rates between different groups.293 Accuracy could be promoted through various regulatory mechanisms. It could begin with standardization, e.g., having public entities like NIST issue standards that apply to any provider of recognition technology, or standards to which datasets must adhere. While NIST already runs a series of evaluations of facial recognition under its Face Recognition Vendor Test program, its reports are currently not mandatory and are thus insufficient.294 In the words of Andrew Ferguson, the State can “require testing, auditing, and third-party certification requirements and forbid use if the technology does not pass the test,” all within the stage of product development, and all of which should be conducted by independent researchers.295
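To make the equal-error-rates requirement concrete, consider how an independent auditor might measure it. The following is a minimal sketch in Python, assuming a hypothetical labeled benchmark in which every comparison records the subject's cohort, the algorithm's match decision, and the ground truth; the function names and the disparity threshold are illustrative assumptions, not drawn from NIST standards or any statute.

```python
from collections import defaultdict

def per_cohort_error_rates(comparisons):
    """comparisons: iterable of (cohort, predicted_match, true_match) tuples."""
    stats = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for cohort, predicted, actual in comparisons:
        s = stats[cohort]
        if actual:
            s["pos"] += 1
            if not predicted:
                s["fn"] += 1  # missed identification (false non-match)
        else:
            s["neg"] += 1
            if predicted:
                s["fp"] += 1  # wrongful identification (false match)
    return {
        cohort: {
            "false_match_rate": s["fp"] / s["neg"] if s["neg"] else None,
            "false_non_match_rate": s["fn"] / s["pos"] if s["pos"] else None,
        }
        for cohort, s in stats.items()
    }

def fails_parity(rates, max_ratio=1.25):
    """Flags a system whose worst cohort's false match rate exceeds the best
    cohort's by more than max_ratio (an illustrative threshold, not law)."""
    fmrs = [r["false_match_rate"] for r in rates.values()
            if r["false_match_rate"]]  # skips cohorts with no data or a zero rate
    return bool(fmrs) and max(fmrs) / min(fmrs) > max_ratio
```

Under an audit of this kind, a system showing, say, a false match rate several times higher for Black faces than for white faces would fail certification outright, regardless of its impressive aggregate accuracy.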

Specifically regarding bias, Ferguson argues that these systems must be tested on how they are applied to “people of different races, ethnicities, genders, ages, or other demographic characteristics” and that “the training data and on-going data being fed into the system should be revealed.”296 In practice, auditing large training datasets for embedded bias might be difficult to accomplish, as some researchers have recently demonstrated.297 But there could be other accompanying solutions to improve accuracy. Some initiatives, for instance, call for developing a “Nutrition Label” for datasets, i.e., a label that grants an overview of a dataset's “ingredients.”298 Other ongoing congressional proposals suggest requiring private entities to study and fix their algorithms if they make inaccurate, unfair, biased, or discriminatory decisions.299
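To illustrate the “Nutrition Label” idea in code, such a label could be as simple as a machine-readable summary attached to every dataset. The sketch below is one hedged possibility; the field names and example values are hypothetical and do not reflect any published labeling standard.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetLabel:
    """A hypothetical machine-readable 'Nutrition Label' for a face dataset."""
    name: str
    source: str                # where and how the images were collected
    consent_basis: str         # e.g., "informed consent" vs. "web-scraped"
    total_subjects: int
    composition: dict = field(default_factory=dict)   # cohort -> fraction
    known_limitations: list = field(default_factory=list)

label = DatasetLabel(
    name="example-faces-v1",
    source="volunteer enrollment program",
    consent_basis="informed consent",
    total_subjects=10_000,
    composition={"Black": 0.12, "white": 0.61, "Asian": 0.18, "other": 0.09},
    known_limitations=["few low-light images", "ages 18-65 only"],
)
```

A regulator, court, or researcher could then see at a glance whether a dataset used to train or search a recognition system plausibly represents the population on which it will be deployed.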

Still, the accuracy of facial recognition (and of other recognition technology) will likely improve over time, depending, inter alia, on factors like lighting, angles, and image quality.300 Within the context of algorithmic decisions, inherent biases must also be governed. The problem with algorithms in this context is their black box nature. Without knowing how an algorithm works, as these systems are often complicated and proprietary,301 it will be difficult to ferret out any hidden discrimination within it.302

This is where transparency steps in. Frank Pasquale argued that the answer to black box discrimination is transparency.303 But transparency of algorithms, while often suggested as a remedy even in the criminal context,304 might be a rather limited solution for various reasons: algorithms might be manipulated; transparency may compromise trade secrets;305 and, perhaps most importantly, algorithms may be too complicated to understand or may not reveal much about the decisions made.306 To address this, Anupam Chander suggested focusing on transparency of inputs and outputs rather than on how the algorithm operates.307

Still, it is important to remember that the human brain is also somewhat of a black box, and while algorithms and data might not be fully transparent, there are still ways to provide transparency of the process and the outcomes.308 It would be wise, for instance, to design these systems with oversight and accountability of the designers and the design from the start.309 Even without disclosing or fully understanding the biases potentially embedded within the algorithms, Congress should strive to diversify not only datasets, but also those who develop these algorithms—thus reducing fears of biased homogeneous developers. Congress must also promote the diversification of training data and datasets, while remembering that willingness to share biometric data must be based on informed consent, as it might otherwise infringe upon individuals' privacy.310

Then comes the question of how to regulate misuse of this technology, even when it is accurate. After all, these systems are suggestive, and humans must remain in the loop to evaluate their outcomes,311 which might also amplify the bias.312 Eliminating human bias will be nearly impossible. But there are many steps that could be taken to reduce the negative effects of such bias within decision-making. While it would be wise to increase external and internal checks and balances within police work in general,313 the State must issue and regularly update institutional guidelines, followed by adequate and enforceable sanctions for any misconduct by agencies or agents.314

Such institutional guidelines, and any use of this technology in general, must be placed under constant scrutiny. This could be achieved by mandating meaningful and enforceable transparency and oversight. First, anything that relates to the use of recognition technology by enforcement agencies must be fully transparent to anyone this technology was used upon and, more broadly, to the public, including comprehensible explanations of how these systems operate and which datasets are used.315 Greater transparency about which datasets are used would also enable proper research on the ramifications of using such datasets on human rights and liberties.316 The same applies to the algorithms used; as suggested by Andrew Selbst, before adopting any technology, the police should be required to perform “algorithmic impact statements. . . . [that] publicly detail the predicted efficacy of and disparate impact resulting from their choice of technology and all reasonable alternatives.”317 These choices must also be strictly disclosed within any court proceedings.318 While some cities have enacted legislation or regulations to increase transparency in this context, it is crucial that these requirements be codified on the federal level.319

More specifically, searches made by enforcement agents should never be completely invisible to others.320 The full lists of potential suspects that enforcement agencies receive must also be transparent to those affected by the software.321 In terms of transparency, Ferguson calls for an annual public report that would reveal information about how such technology was used and on whom.322 But as mentioned, such transparency must operate on both the public and individual levels.
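As a concrete illustration of what such search-level transparency might entail, the sketch below logs a minimal, append-only audit record for every recognition search. The schema, field names, and file path are hypothetical assumptions rather than features of any existing system, and the warrant field anticipates the judicial-oversight requirement discussed next.

```python
import datetime
import json

def log_recognition_search(officer_id, warrant_id, probe_image_hash,
                           dataset_used, candidates_returned):
    """Appends one audit record per search to a write-once log, supporting
    both an annual public report and individual notice to those affected."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "officer_id": officer_id,              # who ran the search
        "warrant_id": warrant_id,              # the judicial authorization
        "probe_image_hash": probe_image_hash,  # identifies the query without
                                               # storing the image itself
        "dataset_used": dataset_used,
        "candidates_returned": candidates_returned,  # the full list, not just
                                                     # the top match
    }
    with open("recognition_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```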

Oversight must also be placed on the shoulders of the judiciary. Much like Congress did with wiretapping and stored communications for law enforcement purposes, Congress must place the use of this technology within the discretion of courts. Congress should require enforcement agencies to obtain a warrant for any use of this technology.323 Ferguson proposed a “Probable Cause-Plus Standard,” much like for wiretaps, “requiring an assertion of probable cause in a sworn affidavit, plus declarations that care was taken to minimize unintended collection of other face images, and that proper steps have been taken to document and memorialize the collection.”324 Others have suggested conditioning recognition use by enforcement agents on a showing of “an individualized suspicion of criminal conduct” and limiting its use to investigations of serious offenses.325

Without delineating the exact threshold for now, placing the judiciary as another barrier against misuse of these technologies is crucial to reduce the chances of mistreatment. The judiciary will be tasked with ensuring that no constitutional violations occur, most notably violations of the Fourth Amendment, i.e., that enforcement agencies do not misuse their mandates to surveil individuals. Courts will also be able to consider how intrusive this technology becomes on the individual level, as they could examine the number of times such a search was conducted on a given individual.

The market should also be regulated. Aside from generally regulating biometric datasets,326 companies that provide recognition services should be barred from working with enforcement agencies without court orders. Such bans on the sharing of biometric data with other parties could be mandated to some extent by legislation.327 The government should also join hands with the market in developing best practices or standards, to which both the market and the government must adhere.328 But such partnerships must be strictly confined, as these collaborations are troubling to begin with. For one, law enforcement agencies might divulge sensitive data about investigations, and this information might be misused or otherwise compromised.329 Second, there are fears that the State will pressure companies to develop best practices or standards that better fit its interests. Here, it might be advisable for the U.S. Federal Trade Commission to use its authority to prohibit unfair and deceptive practices, including racially biased algorithms.330

Notably, creating such a regulatory regime will not solve all the inherent problems of racial recognition. Misuse could still occur, though it would likely be substantially reduced. One of the remaining issues is misuse that bypasses these barriers, or unauthorized use, especially while this technology is still under moratorium. This problem is mostly jurisdictional in scope, as the internet could offer free identification of individuals to anyone, including police officers. This might be a valid concern. Blocking or censoring websites is not only an undesirable form of control, but it would also likely be deemed unconstitutional.331 And these websites or services might be governed under other legal regimes—PimEyes, for example, is governed by EU and Polish laws and regulations—meaning that the United States will be left depending on foreign legal concepts. Here, strong data protection regimes might help ensure that companies are unable to scrape the internet for photographs—or are at least discouraged from doing so—thus making it impossible for them to offer any type of biometric service without risking heavy fines. PimEyes is likely in violation of the General Data Protection Regulation (GDPR) and is expected to face fines accordingly.332

But even if websites like PimEyes are governed by the GDPR, perhaps the most comprehensive form of privacy regulation in the world, what will prevent other websites or services from reappearing when operating from non-democratic countries, far beyond the reach of such regulation? This could be handled to some extent by various forms of economic or political pressure, e.g., sanctions against States that do not regulate the private market's offerings of such technology—all without direct censorship. Similar sanctions could be placed directly on companies that aid totalitarian regimes, especially if these companies have financial or other interests in operating more globally.333 The problem is that when authoritarian needs exist, there will be a market to fulfill them.334 This is where regulation might fall short, leaving private companies to fight against any scraping of their data by competitors.335

What about exceptions to the use of this technology for national security or simply for aiding in non-criminal-law purposes? If, for instance, America were under a terrorist attack, given the technological abilities, would we deny real-time use of recognition technology to find the terrorist(s)? Would we deny the use of these technologies to locate missing individuals—whether disoriented or abducted? This is not easy to answer. On the one hand, it seems implausible to bar the use of these tools for such purposes, and, at least for terrorism, the vague regulatory regime might already permit such uses.336 Still, given the fear of misuse, policymakers must issue direct rules that govern such use and place proper barriers against any potential misuse. Congress must also regulate the use of recognition technology to identify those in imminent danger.337 Unlike in the national security context, for example, a court could more easily grant a “special” warrant for a missing child, especially when the consent of the caregiver is self-evident.

It is crucial, then, that Congress begin regulating racial recognition by placing a national moratorium on any use of recognition technology for any criminal-related task. Currently, despite regulatory suggestions and local moratoriums, the use of recognition technology is largely ungoverned by law. This is where the power of society and the private sector comes into play. As the following Section argues, until such regulatory stages are completed, it is upon the market and social norms to slow down the governmental use of recognition technology before it is normalized.

2. Joining Forces: The Indirect Role of Markets and Society in Slowing Down Racial Recognition

The market for recognition technology is rapidly growing. While once almost taboo for big tech companies,338 today almost every major tech company runs some sort of facial recognition program, along with heavy investment in other recognition technology.339 And as mentioned, some markets, like those created by Clearview AI and PimEyes, directly enable the use of recognition technology by enforcement agencies or by the general public, respectively.

With such a rise in capabilities, corporate social responsibility—practices and policies undertaken by corporations intended to have a positive influence on society—has also been on the rise in recent years.340 The field of AI ethics is broadening to include guiding principles for ethical AI, like transparency, justice and fairness, non-maleficence, responsibility, and privacy.341 And the market is increasingly becoming a powerful tool to regulate behavior, as tech companies' involvement in the regulation of AI is already spreading.342

Until Congress takes proper measures to regulate racial recognition, it might be up to the market and society to slow down its implementation—nudging the State to promptly respond to it. Aside from general involvement in the regulation of AI, many of these companies are involved behind the scenes in regulating and drafting facial recognition laws, as, one might argue, they are motivated to ensure that the law does not negatively impact their business models.343

But other than nudging Congress directly, the market might also self-regulate by placing barriers on the use of such technology, which in turn could influence policymakers to regulate it. This has already begun. On June 8, 2020, IBM announced that it would cease to offer “general purpose facial recognition or analysis software” and would not develop or research the technology for now, largely due to its potential misuse by enforcement agencies.344 Only two days later, Amazon halted the use of Rekognition by law enforcement agencies for one year, to give Congress sufficient time to properly regulate the ethical aspects of its use.345 Microsoft did the same a day after Amazon.346 Other companies have also declared that they will take further steps to fight bias and discrimination within their platforms more generally.347 While it is almost impossible to know the exact motivations behind these moves, they may prove important on the path to regulation.

Currently, the State seems to be reliant on the market, which is good news in the context of slowing down racial recognition. The State has its own capacity to produce and engage in the development of recognition technology, but companies like IBM, Google, Amazon, and Facebook are likely more advanced in this field than the government. Until the State develops its own capacity within this realm—and it might be heading in that direction—it is somewhat dependent on the market to provide it with recognition tools. Thus, at least in theory, the market has some power over the State. IBM even announced that it wishes to work closely with Congress “in pursuit of justice and racial equity.”348 The problem is that these players are increasingly becoming a drop in a sea of facial recognition companies, and some might not even be major players in this field.349

Adding to efforts to reduce racial recognition, market players can also work to diversify datasets, ensuring they are compiled from a mix of ethnicities, genders, and ages, for the use of anyone constructing AI systems and for those working in recognition technology specifically.350 Diversifying the internet in general is a worthy pursuit regardless of recognition technology, and some initiatives currently focus on this important task.351 Some market players might even generate solutions that would, at a minimum, make automated biometric recognition less feasible.352

These examples illustrate steps that could slow down racial recognition, and other initiatives might emerge from the private sector, as some already have, if Congress does not step in. Still, the law should be the prime candidate to properly regulate the use of recognition technology in general and, more specifically, its proper use within the context of law enforcement. Without belittling the debate on ethics or corporate responsibilities in the age of AI, private interests must not be the guardians of criminal enforcement. It is the State that must regulate this realm.

To some extent, the suggested stages presented in this Section are also insufficient to address the bigger question that lies at the heart of this Article—that of human racism. After all, if there were no inequalities in society and humans were not biased, then the development and use of recognition technology for purposes of identification would not raise such issues. Unless society dramatically changes its norms, discrimination will keep reappearing in different forms. In the words of Chief Justice Roberts, “The way to stop discrimination on the basis of race is to stop discriminating on the basis of race.”353 Until such a utopian future arrives, it is upon Congress to actively fight against any form of discrimination and, perhaps most importantly, in the realm of the most liberty-limiting instrument in the arsenal of the State—that of criminal law.

Conclusion

Recognition technology, along with other advancements in the field of AI, might one day aid in making the world a safer place to live. A world awash with cameras and sensors could dramatically aid in detecting and solving crimes, perhaps reducing crime rates accordingly. It might even prove to be a tool that could greatly increase the public's trust in enforcement agencies, leading to increased transparency and accountability. Unfortunately, the current reality is that the use of recognition technology by enforcement agencies, with facial recognition as a primary example, will likely be discriminatory in nature toward specific cohorts—most profoundly, minorities and those with Black skin. We must stop this use now, before it becomes an embedded norm within police work. Otherwise, we will be bound to take such racism for granted, and mistreatment will be amplified through new innovative technologies.

There are steps to be taken, and they must be taken now. This Article suggests a conceptual blueprint for policymakers on how to tackle the problems that arise from the use of this technology, under what was termed racial recognition, which, one can hope, will be taken into consideration in the ongoing and difficult policymaking that this conundrum deserves. While this Article mainly focuses on facial recognition, researchers and policymakers must continue to closely examine the embedded biases that stem from any use of recognition technology and, perhaps most profoundly, voice and gesture recognition. With a rise in the use of technologies that constantly capture our images, our voices, how we walk, the way and speed at which we type, or any other identifier, Congress must be ready to respond quickly to the threats they might bring and to guard against their use in the realm of criminal enforcement.


* Associate Professor, Faculty of Law, University of Haifa; Faculty Member, Center for Cyber, Law and Policy (CCLP) and Haifa Center for Law and Technology (HCLT), University of Haifa. I thank Michael Birnhack, Olga Frishman, Einav Hadas Tamir, Rotem Kadosh Nussbaum, and Jill Presser for their insightful suggestions and comments. I am also grateful to Tomer Antshel, Gabriel Focshaner, Hadar Gilboa, Idan Mor, and Aviv Toby for their excellent assistance in research.