🤖 AI Summary
This work addresses the high query cost of auditing systematic group biases in black-box large language models (LLMs). The authors propose BAFA, a framework that combines active learning with constrained empirical risk minimisation. BAFA maintains a version space of surrogate models consistent with the observed outcomes and actively selects the most informative samples to query, enabling efficient estimation of uncertainty intervals for fairness metrics such as ΔAUC. Experiments on CivilComments and Bias-in-Bios show that BAFA reaches the same error tolerance with up to 40× fewer queries than stratified sampling (e.g., 144 versus 5,956 queries), while yielding lower estimation variance and more stable performance.
📝 Abstract
Large Language Models (LLMs) exhibit systematic biases across demographic groups. Auditing has been proposed as an accountability tool for black-box LLM applications, but it is hampered by the cost of query access. We conceptualise auditing as uncertainty estimation over a target fairness metric and introduce BAFA, the Bounded Active Fairness Auditor, for query-efficient auditing of black-box LLMs. BAFA maintains a version space of surrogate models consistent with the queried scores and computes uncertainty intervals for fairness metrics (e.g., $\Delta$AUC) via constrained empirical risk minimisation; active query selection then narrows these intervals to reduce estimation error. We evaluate BAFA in case studies on two standard fairness datasets, \textsc{CivilComments} and \textsc{Bias-in-Bios}, comparing against stratified sampling, power sampling, and ablations. At tight error thresholds, BAFA reaches the target with up to 40$\times$ fewer queries than stratified sampling (e.g., 144 vs 5,956 queries at $\varepsilon=0.02$ on \textsc{CivilComments}), shows substantially better performance as queries accumulate, and exhibits lower variance across runs. These results suggest that active sampling can reduce the resources needed for independent fairness auditing of LLMs, supporting continuous model evaluation.
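The core loop the abstract describes (maintain a version space of surrogates, bound the fairness metric over it, query the most informative point) can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: it uses simple 1-D threshold surrogates, synthetic data, and a demographic-parity gap in place of ΔAUC, purely to show how interval width drives active querying.

```python
import numpy as np

# Hypothetical sketch of version-space auditing; not the authors' code.
# Surrogates are 1-D threshold classifiers; the version space is the set
# of thresholds consistent with every black-box score queried so far.

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 200)                 # pool of audit inputs (1-D features)
group = rng.integers(0, 2, 200)            # binary protected attribute
hidden_scores = (X > 0.6).astype(float)    # black-box outcomes, revealed per query

# Candidate thresholds: midpoints between sorted feature values, plus endpoints.
xs = np.sort(X)
thresholds = np.concatenate([[0.0], (xs[:-1] + xs[1:]) / 2, [1.0]])

def consistent(t, queried):
    """True iff threshold t reproduces every queried (index, score) pair."""
    return all((X[i] > t) == bool(s) for i, s in queried.items())

def gap(t):
    """Fairness metric of surrogate t: demographic-parity gap between groups."""
    pred = X > t
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

queried = {}
for _ in range(30):                        # query budget
    vs = [t for t in thresholds if consistent(t, queried)]
    lo = min(gap(t) for t in vs)           # lower end of uncertainty interval
    hi = max(gap(t) for t in vs)           # upper end
    if hi - lo <= 0.02:                    # interval tight enough: stop auditing
        break
    # Active selection: query the point the version space disagrees on most.
    frac = np.array([np.mean([X[i] > t for t in vs]) for i in range(len(X))])
    cand = [i for i in range(len(X)) if i not in queried]
    i_star = max(cand, key=lambda i: min(frac[i], 1 - frac[i]))
    queried[i_star] = hidden_scores[i_star]  # one black-box query

print(f"queries: {len(queried)}, interval: [{lo:.3f}, {hi:.3f}]")
```

In this toy setting each query roughly halves the version space (a binary search on the hidden threshold), which is why active selection needs far fewer queries than uniform or stratified sampling; the real method operates over richer surrogate classes and actual LLM scores.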