How Quantization Shapes Bias in Large Language Models

📅 2025-08-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how weight and activation quantization affect demographic subgroup bias in large language models (LLMs). We evaluate nine benchmarks spanning stereotypes, toxicity, sentiment, and fairness, using both probabilistic and generative-text-based metrics across diverse model architectures and reasoning capabilities. Results show that quantization generally reduces toxicity and has negligible impact on sentiment, but aggressive compression slightly exacerbates stereotype bias and inter-group unfairness—consistent across models and demographic subgroups. Crucially, this work is the first to uncover the differential mechanisms by which quantization influences distinct bias dimensions. Our findings provide empirical evidence and actionable design insights for reconciling model compression with ethical alignment, highlighting trade-offs between efficiency and fairness that must be explicitly addressed in quantization-aware development pipelines.

📝 Abstract
This work presents a comprehensive evaluation of how quantization affects model bias, with particular attention to its impact on individual demographic subgroups. We focus on weight and activation quantization strategies and examine their effects across a broad range of bias types, including stereotypes, toxicity, sentiment, and fairness. We employ both probabilistic and generated-text-based metrics across nine benchmarks and evaluate models varying in architecture family and reasoning ability. Our findings show that quantization has a nuanced impact on bias: while it can reduce model toxicity and does not significantly affect sentiment, it tends to slightly increase stereotyping and unfairness in generative tasks, especially under aggressive compression. These trends are generally consistent across demographic categories and model types, although their magnitude depends on the specific setting. Overall, our results highlight the importance of carefully balancing efficiency and ethical considerations when applying quantization in practice.
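To make the compression side of this concrete, here is a minimal sketch of symmetric per-tensor int8 weight quantization, one of the simplest weight-quantization schemes. This is an illustrative toy example, not the paper's specific method; the function names and the example weights are made up. The key point is that rounding to a coarse grid introduces error, and that error grows as the bit-width shrinks, which is why aggressive compression can shift model behavior.

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.031, 0.998], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = np.abs(w - w_hat)  # rounding error, bounded by scale / 2 per weight
```

Activation quantization follows the same idea but is applied at inference time to layer inputs/outputs, typically with calibration data to pick the scale.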
Problem

Research questions and friction points this paper is trying to address.

Evaluates how quantization affects bias in language models
Examines quantization impact on stereotypes, toxicity, and fairness
Analyzes bias changes across demographic subgroups and compression levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates quantization effects on model bias
Uses probabilistic and text-based metrics
Tests weight and activation quantization strategies
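The probabilistic metrics mentioned above typically compare the likelihood a model assigns to minimally contrasting sentence pairs (the structure used by benchmarks such as CrowS-Pairs). The sketch below illustrates that idea with hypothetical, hand-picked log-probabilities; it is not the paper's exact scoring code, and the numbers are invented for illustration.

```python
# Hypothetical log-probabilities a model assigns to paired sentences:
# each tuple is (stereotypical variant, anti-stereotypical variant).
pairs = [
    (-12.3, -13.1),
    (-10.8, -10.2),
    (-15.0, -15.4),
]

def stereotype_score(pairs: list[tuple[float, float]]) -> float:
    """Fraction of pairs where the stereotypical sentence is more likely.

    0.5 indicates no systematic preference; values above 0.5 indicate
    a preference for the stereotypical phrasing.
    """
    prefer = sum(1 for stereo, anti in pairs if stereo > anti)
    return prefer / len(pairs)

score = stereotype_score(pairs)  # 2 of 3 pairs prefer the stereotype here
```

Generated-text-based metrics work differently: the model produces completions, and an external classifier scores them (e.g., for toxicity or sentiment), with results aggregated per demographic subgroup.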