Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional fairness metrics rely solely on prediction accuracy, overlooking implicit decision biases that arise from model uncertainty, particularly inter-group disparities in confidence. Method: the authors propose UCerF, an uncertainty-aware fairness evaluation metric for large language models (LLMs) that integrates group-wise confidence statistics into fairness quantification, exposing latent biases (such as high-confidence erroneous predictions) that metrics like Equalized Odds miss. To support this, they construct a fine-grained gender-occupation coreference resolution benchmark of 31,756 instances, designed to be more diverse and suitable for evaluating modern LLMs. Results: evaluation of ten mainstream open-source LLMs shows that UCerF is markedly more sensitive to, and more interpretable about, intrinsic decision biases; for instance, it reveals covert unfairness in Mistral-7B stemming from overconfident incorrect predictions.

📝 Abstract
The recent rapid adoption of large language models (LLMs) highlights the critical need for benchmarking their fairness. Conventional fairness metrics, which focus on discrete accuracy-based evaluations (i.e., prediction correctness), fail to capture the implicit impact of model uncertainty (e.g., higher model confidence about one group over another despite similar accuracy). To address this limitation, we propose an uncertainty-aware fairness metric, UCerF, to enable a fine-grained evaluation of model fairness that is more reflective of the internal bias in model decisions compared to conventional fairness measures. Furthermore, observing data size, diversity, and clarity issues in current datasets, we introduce a new gender-occupation fairness evaluation dataset with 31,756 samples for co-reference resolution, offering a more diverse and suitable dataset for evaluating modern LLMs. We establish a benchmark, using our metric and dataset, and apply it to evaluate the behavior of ten open-source LLMs. For example, Mistral-7B exhibits suboptimal fairness due to high confidence in incorrect predictions, a detail overlooked by Equalized Odds but captured by UCerF. Overall, our proposed LLM benchmark, which evaluates fairness with uncertainty awareness, paves the way for developing more transparent and accountable AI systems.
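The paper's exact UCerF formulation is not reproduced here, but the core intuition (that two groups can have identical accuracy while the model is systematically more confident about one of them) can be shown with a toy sketch. The function below is an illustrative stand-in, not the paper's metric: it reports both an accuracy gap (what an Equalized-Odds-style check sees) and a confidence gap (the disparity an uncertainty-aware metric targets). The record format and group labels are hypothetical.

```python
from statistics import mean

def accuracy_and_confidence_gaps(records):
    """Toy uncertainty-aware fairness check (illustrative, not UCerF itself).

    records: iterable of (group, correct: bool, confidence: float in [0, 1]).
    Returns (accuracy_gap, confidence_gap) across groups.
    """
    by_group = {}
    for group, correct, conf in records:
        by_group.setdefault(group, []).append((correct, conf))
    # Per-group accuracy and mean confidence.
    accs = {g: mean(1.0 if c else 0.0 for c, _ in rows) for g, rows in by_group.items()}
    confs = {g: mean(conf for _, conf in rows) for g, rows in by_group.items()}
    acc_gap = max(accs.values()) - min(accs.values())
    conf_gap = max(confs.values()) - min(confs.values())
    return acc_gap, conf_gap

# Two groups with identical accuracy but very different confidence:
records = [
    ("A", True, 0.95), ("A", False, 0.90),  # confident even when wrong
    ("B", True, 0.60), ("B", False, 0.55),
]
acc_gap, conf_gap = accuracy_and_confidence_gaps(records)
# acc_gap == 0.0 (accuracy-only metrics see no disparity)
# conf_gap ≈ 0.35 (the hidden confidence disparity)
```

An accuracy-only evaluation would call this model fair; the confidence gap is exactly the kind of latent bias (e.g., Mistral-7B's overconfident errors) that the paper's metric is built to surface.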
Problem

Research questions and friction points this paper is trying to address.

Assessing fairness in LLMs with uncertainty-aware metrics
Addressing dataset limitations for gender-occupation fairness evaluation
Benchmarking LLM fairness using UCerF and a new diverse dataset
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes UCerF for uncertainty-aware fairness evaluation
Introduces diverse gender-occupation fairness dataset
Benchmarks ten LLMs using new metric and dataset
👥 Authors
Yinong Oliver Wang — Carnegie Mellon University (Computer Vision, Responsible AI)
N. Sivakumar — Apple Inc., Cupertino, CA, US
Falaah Arif Khan — New York University (Machine Learning, Data Science, Algorithmic Fairness)
Rin Metcalf Susa — Apple Inc., Cupertino, CA, US
Adam Golinski — Apple Inc., Cupertino, CA, US
Natalie Mackraz — ML Engineer, Apple
B. Theobald — Apple Inc., Cupertino, CA, US
L. Zappella — Apple Inc., Cupertino, CA, US
N. Apostoloff — Apple Inc., Cupertino, CA, US