🤖 AI Summary
This study addresses the underexplored risk that post-training quantization, while reducing the computational cost of large language models, can subtly shift socially biased behavior in ways that evade detection by conventional aggregate fairness metrics. The authors introduce PostTrainingBiasBench—a unified evaluation benchmark spanning 13 datasets—and conduct a large-scale analysis of 50 quantized models (4-bit and 8-bit), revealing and formally naming the phenomenon of “quantization-induced masked bias flipping.” They identify model uncertainty as a key driver of these bias dynamics and demonstrate that 21% of model responses exhibit bias flips after quantization, with 4-bit models showing 4-6x greater behavioral shifts than their 8-bit counterparts. Bias shifts are asymmetric across demographic groups—worsening by up to 18.6% for some and improving by up to 14.1% for others—and larger models show no consistent robustness advantage.
📝 Abstract
Post-training quantization reduces the computational cost of large language models but fundamentally alters their social biases in ways that aggregate metrics fail to capture. We present the first large-scale study of 50 quantized models evaluated on PostTrainingBiasBench, a unified benchmark of 13 closed- and open-ended bias datasets. We identify a phenomenon we term quantization-induced masked bias flipping, in which up to 21% of responses flip between biased and unbiased states after quantization despite no change in aggregate bias scores. These flips are strongly driven by model uncertainty: high-uncertainty responses are 3-11x more likely to change than confident ones. Quantization strength amplifies this effect, with 4-bit quantized models exhibiting 4-6x more behavioral changes than 8-bit quantized models. Critically, these changes create asymmetric impacts across demographic groups—bias can worsen by up to 18.6% for some groups while improving by up to 14.1% for others—yielding misleadingly neutral aggregate outcomes. Larger models show no consistent robustness advantage, and group-specific shifts vary unpredictably across model families. Our findings demonstrate that compression fundamentally alters bias patterns, making post-quantization bias evaluation and targeted interventions essential for reliable deployment.
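The core mechanism of masked bias flipping—per-response flips canceling out in an aggregate score—can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's benchmark code: the labels, group names, and helper functions are assumptions chosen only to show how an aggregate bias score can stay flat while individual responses and per-group rates shift.

```python
# Toy illustration (not the paper's implementation) of masked bias flipping:
# per-response biased/unbiased labels before and after quantization.

def flip_rate(before, after):
    """Fraction of responses whose biased/unbiased label changed."""
    return sum(b != a for b, a in zip(before, after)) / len(before)

def bias_score(labels):
    """Aggregate bias score: fraction of responses labeled biased (1)."""
    return sum(labels) / len(labels)

# Hypothetical labels for two demographic groups (1 = biased response).
group_a_before = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
group_a_after  = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]  # bias worsens for group A
group_b_before = [1, 1, 0, 1, 0, 0, 0, 0, 1, 0]
group_b_after  = [0, 1, 0, 1, 0, 0, 0, 0, 0, 0]  # bias improves for group B

before = group_a_before + group_b_before
after = group_a_after + group_b_after

print(f"aggregate bias before: {bias_score(before):.2f}")       # 0.30
print(f"aggregate bias after:  {bias_score(after):.2f}")        # 0.30
print(f"flip rate:             {flip_rate(before, after):.2f}")  # 0.20
```

The aggregate score is identical before and after, yet 20% of responses flipped and the two groups moved in opposite directions—exactly the failure mode the benchmark is designed to surface.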