🤖 AI Summary
Are existing bias evaluation benchmarks reliable? This paper systematically investigates the robustness of mainstream large language model (LLM) bias evaluation methods, including BOLD, CrowS-Pairs, and StereoSet, through cross-benchmark assessment and ranking-consistency analysis using Kendall’s τ and Spearman’s ρ. The first empirical finding is severe inconsistency in model rankings across benchmarks, with an average Kendall’s τ of only 0.32, indicating that current benchmarks lack cross-method comparability and that individual evaluation scores are insufficient for reliable safety-oriented model comparison. Crucially, the evaluation methodologies themselves are identified as a systematic source of bias, challenging the validity and reliability of prevailing benchmarks. Based on these findings, the paper proposes community-wide evaluation guidelines that advocate a shift from “method-centric” assessment toward “robustness-first” bias evaluation, thereby advancing methodological rigor and interpretability in LLM fairness research.
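As a concrete illustration of the ranking-consistency analysis described above, here is a minimal sketch of how cross-benchmark agreement could be measured with Kendall’s τ and Spearman’s ρ using SciPy. The model names and benchmark scores below are hypothetical placeholders, not values from the paper.

```python
from scipy.stats import kendalltau, spearmanr

models = ["model_a", "model_b", "model_c", "model_d", "model_e"]

# Hypothetical per-model bias scores from two different benchmarks
# (lower = less biased); these are illustrative numbers only.
benchmark_1 = {"model_a": 0.12, "model_b": 0.34, "model_c": 0.25,
               "model_d": 0.41, "model_e": 0.18}
benchmark_2 = {"model_a": 0.30, "model_b": 0.22, "model_c": 0.27,
               "model_d": 0.45, "model_e": 0.19}

# Each benchmark induces a ranking; express it as the rank position of each model.
rank_1 = sorted(models, key=lambda m: benchmark_1[m])
rank_2 = sorted(models, key=lambda m: benchmark_2[m])
pos_1 = [rank_1.index(m) for m in models]
pos_2 = [rank_2.index(m) for m in models]

# Rank-correlation coefficients: 1.0 means the two benchmarks order the models
# identically, while values near 0 mean the orderings are essentially unrelated.
tau, _ = kendalltau(pos_1, pos_2)
rho, _ = spearmanr(pos_1, pos_2)
print(f"Kendall's tau = {tau:.2f}, Spearman's rho = {rho:.2f}")
```

In the paper's setting, this kind of pairwise comparison is computed across benchmarks over the same set of models; a low average τ signals that the benchmarks disagree about which models are less biased.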
📝 Abstract
The creation of benchmarks to evaluate the safety of Large Language Models is one of the key activities within the trusted AI community. These benchmarks allow models to be compared on different aspects of safety, such as toxicity, bias, and harmful behavior. Independent benchmarks adopt different approaches, with distinct data sets and evaluation methods. We investigate how robust such benchmarks are by using different approaches to rank a set of representative models for bias and comparing how similar the overall rankings are. We show that different but widely used bias evaluation methods result in disparate model rankings. We conclude with recommendations for the community on the use of such benchmarks.