Adjudicating Algorithms: Accountability in Regulation of Surveillance, Privacy, and Discrimination

Introduction

The movement for accountable algorithms has attained critical mass. That momentum extends to a range of areas where the collection of data plays a key role, including privacy, online disinformation, surveillance, and screening for credit, housing, employment, and government benefits. For example, the White House has released an Artificial Intelligence (AI) Bill of Rights that outlines standards and recourse for a host of AI applications that touch human needs and endeavors. Assessments, disclosure, and procedures for filing complaints about abuse are frequent features of this turn toward accountability. However, at least in the United States, accountability remains a set of tropes rather than a consistent approach across domains. Even among scholars proposing reforms, disagreements and omissions are frequent calling cards. Scholars are divided about the utility of procedures for review of individual complaints, as the ongoing controversy over the value of the Meta Oversight Board (OB) reveals. In addition, few, if any, accountability frameworks respond to the ubiquity of functional trade-offs between different values, or even within a single value. For example, attempts to make algorithms more transparent can reduce their accuracy. This Article outlines a consistent approach, which it calls stewardship, to the regulation of data abuse.

It is not surprising that no unified approach has emerged for the whole spectrum of algorithmic activity, including foreign surveillance, data breaches, platform content moderation, and applicant screening. Each of these domains has a vast literature and disparate technical challenges. Moreover, each has a different set of regulators and stakeholders. On the regulatory front, the U.S. Federal Trade Commission (FTC) has long used its authority to litigate against unfair and deceptive practices to combat data breaches that expose the personal information of millions of customers. However, the FTC has no jurisdiction to monitor or regulate government surveillance, although the FTC's new privacy initiative suggests that the agency is open to issuing privacy rules that go beyond its traditionally limited enforcement role. Similarly, U.S. agencies that enforce antidiscrimination laws have thus far played only a modest role in addressing algorithms with discriminatory effects on access to credit, housing, or employment. This balkanization also reflects the United States' traditional sectoral approach, in which regulation of individual sectors, such as the power grid, predominates, while no single agency has overarching jurisdiction.

An approach that works across sectors and subject areas whose common feature is algorithmic activity is overdue. Others have called for a unified approach in the United States that resembles that of the European Union (EU). However, these proposals often suffer from a hostile view of adjudication and skepticism toward its most prominent example, the Meta OB. This Article takes a fresh look at the OB's recent Cross-Check and Gender Identity and Nudity decisions, which recommend limits on bias in Meta's content moderation policies and practices. It also articulates a broader view of stewardship that fits different sectors.

Stewardship flows from the view that regulators, including entities regulating themselves, are fiduciaries for the data and communications they oversee. That fiduciary conception does not necessarily include the vast array of technical obligations that the common law prescribes. However, it does entail three robust norms: iterative review, layered accountability, and institutionalized opposition. Iterative review refers to adjudication that considers the validity of systems and programs. A programmatic approach installs practices as benchmarks, such as the Meta OB's recommendation of content-moderation policies that do not discriminate against marginalized groups, and iterative review monitors compliance with those benchmarks. Layered accountability supplements adjudication with public disclosure and broad participation by interested parties. Institutionalized opposition establishes a voice within each subject domain that counters the narratives of technology companies or the government. To apply the stewardship model, this Article considers three challenging areas: (1) transatlantic data privacy and government surveillance; (2) data breaches and privacy in the private sector; and (3) regulation of applicant-screening algorithms for credit, housing, employment, and government benefits.

This Article refines earlier approaches in several ways. First, it provides a broad conception of programmatic adjudication that encompasses both individual and aggregate cases. Some prominent scholars have derided individual adjudication of algorithmic activity as narrow and inconsequential. This Article argues, in contrast, that adjudication counts as programmatic because of its effects on practices and discourse. Those effects matter more than the procedural form the adjudication takes. In making this argument, the Article relies on recent Meta OB decisions that previous scholarship has not had a chance to address. Second, this Article seeks to disrupt the silos that have hitherto contained analysis of algorithmic activity. Most previous work in the literature has addressed a single topic, such as privacy and corporate data practices, instead of the entire landscape, which also includes government surveillance and applicant-screening algorithms. That focused approach can yield depth of insight but can also miss common threads linking topics together. Third, because this Article provides a comprehensive overview of the algorithmic landscape, it also addresses functional trade-offs between fields that much other work misses, such as the trade-off between preserving privacy from hackers' exploits and curbing government surveillance of hackers' activity. Through these three refinements, the Article helps inform current debates.

This Article has five Parts. Part I outlines the problems of government surveillance, cyber breaches and disinformation, and algorithmic screening. Part II describes proposed measures for resolving those problems, including standards, disclosure, and assessments. Part III considers procedural safeguards, centering on the Meta OB's recent decisions. Part IV sets out the stewardship approach, focusing on iterative review, layered accountability, and institutionalized opposition. Part V applies the stewardship model to three areas: the proposed Data Protection Review Court for EU complaints regarding U.S. government surveillance; private-sector data breaches and online disinformation; and algorithmic applicant screening.


* Professor of Law, Roger Williams University. B.A., Colgate University; J.D., Columbia Law School.