Impact Ipsa Loquitur: A Reverse Hand Rule for Consumer Finance

Introduction

This symposium—Automating Bias—considers how artificial intelligence can produce, reinforce, and hide racial and other forms of discrimination in consumer finance. The animating intuition is that the complexity and opacity of algorithms and artificial intelligence in consumer lending create a greater need for disparate impact analysis to combat lending discrimination. This view was articulated forcefully by the current Director of the Consumer Financial Protection Bureau (CFPB), Rohit Chopra, when he was still a commissioner at the Federal Trade Commission (FTC):

It is rare to uncover direct evidence of racist intent. That’s why disparate impact analysis is a critical tool to uncover hidden forms of discrimination, not only in this context but throughout the economy. Companies are collecting an ever-growing universe of personal data, and through sophisticated machine learning tools and other forms of predictive technology, this data can produce proxies for race and other protected classes. Often this discrimination is invisible to its victims, making it especially important that regulators work proactively to root it out. 

We share Chopra’s intuition and concern, but we would go further.

Algorithmic lending practices do not simply render discrimination “invisible to its victims.” The sources of discriminatory effects within an algorithm may be similarly invisible to the regulators and private litigants who seek to protect borrowers, and therefore unprovable to the courts with jurisdiction to apply these regulations and statutes. Because machine learning tools are black boxes, even the lenders that rely on AI in their financial decision-making may not understand that these tools draw on broad-based data in ways that can lead to decisions that adversely impact protected groups.

Accordingly, this Article makes three points. Two are relatively obvious extensions of Director Chopra’s insight. The third is novel. The first obvious point is that the black-box nature of AI, and the way it interacts with legacy discrimination, increases the need for disparate impact liability in the consumer lending context. The second obvious point is that algorithmic lending discrimination comprises more than credit denial. It also includes lending on discriminatory (or even predatory) terms (“algorithmic discrimination”), as well as algorithmic marketing of predatory loans to vulnerable populations (“algorithmic predation”). While AI increases the risk of algorithmic discrimination and predation, consumers have suffered earlier forms of this sort of discrimination since risk-based pricing became commonplace. The novel point is that the addition of unfairness jurisdiction as an overlapping statutory basis to combat bias in consumer lending fundamentally changes the way that rebuttal works in disparate impact cases. If a plaintiff raises an inference of either disparate racial impact or predatory lending practices (whether unfair, deceptive, or abusive), the defendant must be prepared to establish both that the algorithm is nondiscriminatory and that the resulting lending practices are fair.

As Director Chopra observes, intentional discrimination is comparatively rare and difficult to prove. This has been the case since well before the automation of consumer finance and other markets. At least since the Supreme Court’s decision in Griggs v. Duke Power Co., courts have viewed the prohibition against “discrimination” as including both discriminatory treatment and discriminatory effects.

AI increases the importance of disparate impact analysis of lender behavior, with regard to both discrimination and predatory marketing. AI masks human agency; it may thus mask intentional discriminatory treatment and conceal the reason for discriminatory effects. At the same time, lenders and other firms relying on AI are likely to assert that this sort of machine learning satisfies the standard set out in Griggs, since AI is increasingly viewed as a legitimate business practice for which no less discriminatory alternative exists. As the CFPB and other regulators decide how their enforcement and regulatory powers should address algorithmic lending discrimination and unfair, deceptive, and abusive loans and loan terms produced through machine learning, they must recognize that disparate impact analysis may work somewhat differently depending on whether the challenged behavior is credit denial, terms discrimination, or algorithmic predation.

Questioning the source, content, and structure of this standard is particularly important in the consumer finance setting because the CFPB asserts multiple statutory bases for its jurisdiction to police lending discrimination, including both antidiscrimination statutes, such as the Equal Credit Opportunity Act (ECOA), and the CFPB’s jurisdiction to regulate “unfair, deceptive, or abusive” acts and practices (its so-called “UDAAP jurisdiction”). Moreover, the CFPB shares jurisdiction over discrimination in lending markets with a handful of other regulators whose authority rests on similar but distinct statutes, such as the Federal Trade Commission Act (FTC Act), the Fair Housing Act (FHA), and the Community Reinvestment Act (CRA). These questions are not academic. They are situated at the center of pending enforcement actions.

One example of an action brought to eradicate AI-driven predation in consumer lending can be found in a recent suit brought by the CFPB and the New York Attorney General against Credit Acceptance Corporation (CAC). There, the CFPB and the New York AG alleged that CAC used its lending algorithm not to predict the ability of a borrower to pay, but instead to “predict how much it [would] collect from consumers over the life of a loan—not just from consumers’ monthly payments, but also from potential collection efforts, repossessions, auctions, and deficiency judgments.” Based on the results of this algorithm, CAC would calculate the “price” of a car both to evade state usury limits and to maximize its return on investment, regardless of the borrower’s ability to pay that price. On loans like these, lenders make money even if the borrower defaults and loses the car.
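
To see how such a model can make an unaffordable loan profitable, consider a stylized arithmetic sketch using hypothetical figures of our own devising (none of these numbers are drawn from the complaint). Suppose a lender advances $10,000 on a vehicle loan, and its algorithm projects $7,000 in monthly payments before an expected default, $4,000 in auction proceeds after repossession, and $1,500 recovered through a deficiency judgment:

\[ E[\text{collections}] = \$7{,}000 + \$4{,}000 + \$1{,}500 = \$12{,}500 > \$10{,}000 \ (\text{amount advanced}). \]

An algorithm optimized on projected collections rather than on the borrower’s ability to repay has no built-in reason to screen out loans the borrower cannot afford.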

This Article considers how disparate impact analysis can and should be translated into the context of algorithmic consumer and household lending, especially algorithmic lending that might also be viewed as extended on predatory terms. The burden-shifting formulas set out in Griggs and its progeny do not translate easily to the underwriting and marketing decisions inherent in consumer lending markets. This is especially true in markets for subprime loans, where applicants often are not denied credit for discriminatory reasons, but instead are granted credit along a range of contract terms that may discriminate among borrowers on the basis of race, color, or national origin.

The difficulty of proving discriminatory effects in consumer finance markets is further magnified, in several important ways, when lending decisions are made by algorithm. First, “several government agencies and officials have recognized generally that AI systems can be infected by bias, have discriminatory impacts, and harm marginalized communities.” Second, because AI weighs many types of data, not just consumers’ credit histories and credit scores, it is far more difficult to determine which data points correlate with credit risk and which with race. Third, proof of causation is especially fraught in consumer lending markets powered by AI given the history of race-based lending in these markets. As a result, the lessons of critical race theory are central to understanding the danger that algorithmic lending will ossify this racial history.

One solution to the problems created by the burden-shifting standards set out in Griggs is to supplement the various civil rights statutes by characterizing discriminatory effects as unfair, deceptive, or abusive acts or practices. The CFPB recently signaled its intent to rely on its UDAAP jurisdiction as an additional basis for regulating discrimination in consumer lending. Vested with similar jurisdictional authority, the FTC and various state attorneys general also have brought enforcement actions alleging that lenders, and those working in tandem with lenders, engaged in unfair and deceptive acts and practices (UDAP) with discriminatory effects. And yet, because statutory definitions of “unfair” and “abusive” practices might well be characterized as premised on balancing and burden-shifting standards resembling those in Griggs and its progeny, the turn to UDAAP may not be sufficient to resolve the problems that AI creates for prima facie proof of disparate impact.

In our view, this expansion of the statutory basis for challenging algorithmic discrimination requires reconsideration of the prima facie and rebuttal burdens of proof in algorithmic discrimination cases, regardless of which regulator is bringing suit and under what statutory regime. Disparate impact analysis in discrimination law must be viewed in context. Circumstantial evidence has long been relied on to establish intent and causation in other contexts. As early as 1601, so-called “badge[s] of fraud” were used to establish intent to “hinder, delay, or defraud creditors.” Similarly, the existence of certain “red flags” has been used to establish a lack of “good faith” for purposes of certain purchaser protections under the Uniform Commercial Code. To disentangle the balancing and evidentiary burdens associated with proof of discriminatory impact through facially neutral algorithms, we turn to the rules permitting a prima facie case of tort liability to be established through circumstantial evidence of, for example, intent, responsibility, or causation.

The classic example of burden shifting in tort law involves the doctrine of res ipsa loquitur, where the facts are said to speak for themselves; proof of such facts shifts the burden of going forward with the evidence and places this burden on the defendant. Viewed from the context of disparate impact, the burden-shifting formula first identified in Griggs similarly rests on the view that the discriminatory effects “speak for themselves”—impact ipsa loquitur. The definitions of “unfairness” found in the FTC Act and the Consumer Financial Protection Act (CFPA) also imply a similar prima facie presumption that “substantial injury” to consumers should subject relevant actors to a burden of justification.

Burden-shifting rules of this type have emerged in tort law to address information asymmetries associated with establishing negligence, and also to address problems of proving causation in cases involving joint tortfeasors, multiple causes, and baseline risks. Guido Calabresi labeled these “reverse Learned Hand test[s].”
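
For readers unfamiliar with the underlying test, a simplified sketch may help; the algebra below is our illustration, not Calabresi’s precise formulation. Under the classic Hand formula, an injurer is negligent when the burden of a precaution is less than the expected harm it would have avoided:

\[ B < P \cdot L, \]

where \(B\) is the cost of the untaken precaution, \(P\) the probability of injury, and \(L\) the magnitude of the loss. A “reverse” Hand test flips the procedural posture: once the plaintiff shows the kind of injury that ordinarily follows from a cost-justified but untaken precaution, the defendant bears the burden of coming forward with evidence that \(B \geq P \cdot L\), that is, that no cost-justified precaution was available. By analogy, proof of disparate impact or of substantial consumer injury can play the same triggering role in algorithmic lending cases.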

The same evolution has not occurred in disparate impact litigation, however. In fact, courts have doubled down on the need to establish numerous factual predicates, presumably without the benefit of burden shifting, an approach we view as wrongheaded, especially as applied to consumer finance. For example, in Texas Department of Housing and Community Affairs v. Inclusive Communities, a case addressing disparate impact liability under the Fair Housing Act, the Supreme Court notably emphasized that proof of causation remains part of the initial prima facie case of discriminatory effects. The role of causation is made particularly complex in housing and consumer lending markets, however, by decades of systematic and institutionalized discrimination in those markets.

This Article proceeds in three steps. First, it considers how artificial intelligence, algorithms, and big data can combine to facilitate predation and hide discrimination. Second, it explores the existing legal landscape and finds gaps in the relationship between discrimination and predatory lending doctrines. Third, it places both disparate impact analysis and UDAAP doctrines within the broader context of balancing and burden-shifting rules in tort law and considers how this analysis might be tailored for the new algorithmic lending environment. We conclude that claims of predatory discrimination should permit consideration of two types of circumstantial evidence: evidence of disparate racial impact and evidence of unfair lending practices. We argue this because the legacy of systemic racism and the nontransparency of algorithmic lending together mean that unfair (predatory) practices are likely to have discriminatory impact. To put it another way: discrimination in consumer credit is unfair, and that unfairness may well have disproportionate impact on victims of legacy discrimination and other vulnerable populations. Therefore, regardless of the doctrinal basis of the complaint or enforcement action, defendants in suits premised on algorithmic discrimination should be prepared to justify their use of an algorithm by presenting evidence of both racial neutrality and fairness.


* Cooper Family Chair in Urban Legal Issues, Fordham University Law School. ** David M. Barse Professor, Brooklyn Law School.