Audit Me If You Can: Query-Efficient Active Fairness Auditing of Black-Box LLMs

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 0 (influential: 0)
🤖 AI Summary
This work addresses the high query cost of auditing black-box large language models (LLMs) for systematic group biases. The authors propose BAFA, a framework that combines active learning with constrained empirical risk minimisation. BAFA maintains a version space of surrogate models consistent with the queried outcomes and actively selects the most informative samples to query next, yielding uncertainty intervals for fairness metrics such as ΔAUC. Experiments on CivilComments and Bias-in-Bios show that BAFA reaches the same error tolerance with up to 40× fewer queries than stratified sampling (e.g., 144 versus 5,956 queries), while exhibiting lower estimation variance and more stable performance.
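The ΔAUC metric mentioned above is the gap in group-wise AUC, i.e., how differently the model ranks positives above negatives for each demographic group. A minimal sketch of the metric itself (not the authors' code; the Mann-Whitney pairwise estimator is used for clarity, and the function assumes exactly two groups):

```python
def auc(scores_pos, scores_neg):
    # Mann-Whitney U estimate of AUC: P(score of a positive > score of a
    # negative), counting ties as 1/2.
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def delta_auc(scores, labels, groups):
    """Absolute gap between the per-group AUCs (assumes two groups)."""
    per_group = {}
    for g in set(groups):
        pos = [s for s, y, gg in zip(scores, labels, groups) if gg == g and y == 1]
        neg = [s for s, y, gg in zip(scores, labels, groups) if gg == g and y == 0]
        per_group[g] = auc(pos, neg)
    a, b = per_group.values()
    return abs(a - b)

# Toy data: group "a" is ranked perfectly (AUC 1.0), group "b" is not (0.75).
scores = [0.9, 0.2, 0.8, 0.4, 0.4, 0.5, 0.6, 0.3]
labels = [1, 0, 1, 0, 1, 0, 1, 0]
groups = ["a"] * 4 + ["b"] * 4
gap = delta_auc(scores, labels, groups)  # → 0.25
```

An auditor who can only estimate these per-group AUCs from a limited query budget inherits sampling error in the gap, which is the uncertainty BAFA's intervals are meant to bound.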

📝 Abstract
Large Language Models (LLMs) exhibit systematic biases across demographic groups. Auditing is proposed as an accountability tool for black-box LLM applications, but suffers from resource-intensive query access. We conceptualise auditing as uncertainty estimation over a target fairness metric and introduce BAFA, the Bounded Active Fairness Auditor for query-efficient auditing of black-box LLMs. BAFA maintains a version space of surrogate models consistent with queried scores and computes uncertainty intervals for fairness metrics (e.g., $\Delta$ AUC) via constrained empirical risk minimisation. Active query selection narrows these intervals to reduce estimation error. We evaluate BAFA on two standard fairness dataset case studies: \textsc{CivilComments} and \textsc{Bias-in-Bios}, comparing against stratified sampling, power sampling, and ablations. BAFA achieves target error thresholds with up to 40$\times$ fewer queries than stratified sampling (e.g., 144 vs 5,956 queries at $\varepsilon=0.02$ for \textsc{CivilComments}) for tight thresholds, demonstrates substantially better performance over time, and shows lower variance across runs. These results suggest that active sampling can reduce resources needed for independent fairness auditing with LLMs, supporting continuous model evaluations.
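The version-space idea in the abstract can be illustrated on a toy problem. The sketch below is a heavily simplified analogue, not BAFA itself: the black box is a hidden 1-D threshold rule, the surrogate class is threshold classifiers, a demographic-parity gap stands in for ΔAUC, and the interval is the min/max of the metric over all surrogates still consistent with the queries. The `black_box` function, grid, and item distribution are all hypothetical.

```python
import random

# Toy setup: items are 1-D features tagged with a group; the black box we
# pay per query is a hidden threshold rule (an assumption for this sketch).
random.seed(0)
items = [(random.random(), g) for g in ["a", "b"] * 50]

def black_box(x):
    return 1 if x >= 0.55 else 0

def parity_gap(thresh):
    # Fairness metric of a surrogate: |P(pred=1 | a) - P(pred=1 | b)|.
    rates = {}
    for g in ("a", "b"):
        xs = [x for x, gg in items if gg == g]
        rates[g] = sum(x >= thresh for x in xs) / len(xs)
    return abs(rates["a"] - rates["b"])

grid = [i / 200 for i in range(201)]  # candidate surrogate thresholds
queried = {}                          # x -> observed black-box label

def version_space():
    # Surrogates that reproduce every observed outcome so far.
    return [t for t in grid
            if all((x >= t) == bool(y) for x, y in queried.items())]

def interval():
    # Uncertainty interval: range of the metric over surviving surrogates.
    gaps = [parity_gap(t) for t in version_space()]
    return min(gaps), max(gaps)

# Active loop: query the unlabeled point where surviving surrogates
# disagree most, which shrinks the version space fastest.
for _ in range(8):
    vs = version_space()
    x_next = max((x for x, _ in items if x not in queried),
                 key=lambda x: min(sum(x >= t for t in vs),
                                   sum(x < t for t in vs)))
    queried[x_next] = black_box(x_next)

lo, hi = interval()  # bounds on the fairness metric after 8 queries
```

Because the true threshold 0.55 is itself in the grid, the interval always contains the black box's actual parity gap; each disagreement-maximizing query roughly halves the surviving version space, mirroring the query savings reported over passive stratified sampling.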
Problem

Research questions and friction points this paper is trying to address:

- fairness auditing
- black-box LLMs
- query efficiency
- systematic bias
- active sampling
Innovation

Methods, ideas, or system contributions that make the work stand out:

- active fairness auditing
- query efficiency
- black-box LLMs
- uncertainty estimation
- constrained empirical risk minimisation
David Hartmann
Technische Universität Berlin
Algorithmic Auditing, Fairness, Accountability, Machine Learning, Critical Data Studies
Lena Pohlmann
Weizenbaum Institut Berlin, Technische Universität Berlin
Lelia Hanslik
Technische Universität Berlin
Noah Giessing
FIZ Karlsruhe
Bettina Berendt
Weizenbaum Institut Berlin, Technische Universität Berlin, KU Leuven
Pieter Delobelle
KU Leuven
machine learning, NLP, fairness, AI ethics