Algorithmic UDAP

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper examines the legal regulation of algorithmic discrimination in fair lending, comparing the disparate impact (DI) and unfair, deceptive, or abusive acts or practices (UDAP) frameworks. Using a simulated lending environment, formal legal modeling, and cross-paradigm empirical analysis, it systematically demonstrates, for the first time, that UDAP constitutes an independent analytical framework: its "unfairness" criterion emphasizes avoidability of harm and proportionality, while its "deceptiveness" and "abusiveness" criteria identify algorithmic harms invisible to DI analysis. The study confirms UDAP's distinctive capacity to detect concealed algorithmic harms, but it also reveals significant interpretive ambiguity surrounding UDAP's core legal concepts in algorithmic contexts. These findings give regulators both theoretical grounding and practical cautions, advancing the precision of legal governance for algorithmic fairness.

📝 Abstract
This paper compares two legal frameworks -- disparate impact (DI) and unfair, deceptive, or abusive acts or practices (UDAP) -- as tools for evaluating algorithmic discrimination, focusing on the example of fair lending. While DI has traditionally served as the foundation of fair lending law, recent regulatory efforts have invoked UDAP, a doctrine rooted in consumer protection, as an alternative means to address algorithmic discrimination harms. We formalize and operationalize both doctrines in a simulated lending setting to assess how they evaluate algorithmic disparities. While some regulatory interpretations treat UDAP as operating similarly to DI, we argue it is an independent and analytically distinct framework. In particular, UDAP's "unfairness" prong introduces elements such as avoidability of harm and proportionality balancing, while its "deceptive" and "abusive" standards may capture forms of algorithmic harm that elude DI analysis. At the same time, translating UDAP into algorithmic settings exposes unresolved ambiguities, underscoring the need for further regulatory guidance if it is to serve as a workable standard.
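To make the DI side of the comparison concrete, the sketch below shows how a disparate-impact check might be operationalized in a simulated lending setting. This is a hypothetical illustration, not the paper's actual model: the group labels, approval rates, and sample sizes are invented, and the test used is the conventional "four-fifths rule" adverse impact ratio from DI practice rather than anything specified in the abstract.

```python
import random

random.seed(0)

def simulate_decisions(n, approve_rate):
    """Simulate n binary loan-approval decisions at a fixed approval rate."""
    return [random.random() < approve_rate for _ in range(n)]

# Two hypothetical applicant groups whose (simulated) model approval
# rates differ; the specific rates are assumptions for illustration.
group_a = simulate_decisions(1000, 0.60)  # reference group
group_b = simulate_decisions(1000, 0.42)  # comparison group

rate_a = sum(group_a) / len(group_a)
rate_b = sum(group_b) / len(group_b)

# Adverse impact ratio (AIR): the comparison group's approval rate
# relative to the reference group's. Under the four-fifths rule,
# an AIR below 0.8 is a conventional red flag for disparate impact.
air = rate_b / rate_a

print(f"approval rates: {rate_a:.2f} vs {rate_b:.2f}, AIR = {air:.2f}")
if air < 0.8:
    print("potential disparate impact under the four-fifths rule")
else:
    print("within the four-fifths threshold")
```

The contrast the paper draws is visible even in this toy setup: the DI test reduces to a single outcome-disparity statistic, whereas the UDAP prongs described above (avoidability, proportionality balancing, deception) would require evidence about the lender's conduct and disclosures that no approval-rate ratio can capture.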
Problem

Research questions and friction points this paper is trying to address.

Compares legal frameworks for evaluating algorithmic discrimination in lending
Formalizes disparate impact and UDAP doctrines in simulated lending settings
Identifies ambiguities in applying consumer protection standards to algorithmic systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formalizes UDAP and disparate impact in a simulated lending setting
UDAP introduces avoidability and proportionality for algorithmic fairness
UDAP captures algorithmic harms beyond disparate impact analysis