🤖 AI Summary
Aligned large language models (LLMs) persistently exhibit implicit biases against gender-diverse identities (e.g., transgender, non-binary), inherited from base models and amplified by dominant alignment methods—particularly Direct Preference Optimization (DPO). Method: We conduct the first systematic evaluation of 16 DPO-aligned models across gender-diverse identities, propose a transferable framework for measuring implicit reward-signal bias, and develop a community-driven, multidimensional evaluation protocol. Results: DPO models are highly sensitive to biases present in the Supervised Fine-Tuning (SFT) stage and significantly exacerbate stigmatizing and non-affirming language. Existing benchmarks (e.g., BOLD, CrowS-Pairs) entirely overlook bias patterns affecting marginalized gender identities. Based on these findings, we introduce practical guidelines for evaluating LLM alignment specifically with respect to minoritized gender identities, advocating for fairer, more inclusive alignment paradigms grounded in intersectional equity.
📝 Abstract
Natural-language assistants are designed to provide users with helpful responses while avoiding harmful outputs, largely achieved through alignment to human preferences. Yet there is limited understanding of whether alignment techniques may inadvertently perpetuate or even amplify harmful biases inherited from their pre-aligned base models. This issue is compounded by the choice of bias evaluation benchmarks in popular preference-finetuned models, which predominantly focus on dominant social categories, such as binary gender, thereby limiting insights into biases affecting underrepresented groups. Towards addressing this gap, we center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias in LLMs. Our key contributions include: 1) a comprehensive survey of bias evaluation modalities across leading preference-finetuned LLMs, highlighting critical gaps in gender-diverse representation, 2) a systematic evaluation of gender-diverse biases across 16 models spanning Direct Preference Optimization (DPO) stages, uncovering harms popular bias benchmarks fail to detect, and 3) a flexible framework for measuring harmful biases in implicit reward signals applicable to other social contexts. Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning (SFT) and can amplify two forms of real-world gender-diverse harms from their base models: stigmatization and gender non-affirmative language. We conclude with recommendations tailored to DPO and broader alignment practices, advocating for the adoption of community-informed bias evaluation frameworks to more effectively identify and address underrepresented harms in LLMs.
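The "implicit reward signals" mentioned above presumably refer to the standard DPO construction, in which the aligned policy and its reference model define a reward r(x, y) = β·log(π_θ(y|x) / π_ref(y|x)). A minimal sketch of how one might probe such a signal for identity-linked bias, using hypothetical per-token log-probabilities for two identity-swapped completions (the function names and toy values are illustrative assumptions, not the paper's actual framework):

```python
import math

def implicit_reward(policy_logprobs, ref_logprobs, beta=0.1):
    """DPO implicit reward: beta * (log pi_theta(y|x) - log pi_ref(y|x)),
    where each sequence log-prob is the sum of per-token log-probs."""
    return beta * (sum(policy_logprobs) - sum(ref_logprobs))

def reward_gap(pol_a, ref_a, pol_b, ref_b, beta=0.1):
    """Implicit-reward gap between two counterfactual completions.

    A consistently positive gap across a template set would suggest the
    aligned policy rewards completion A over its identity-swapped
    counterfactual B, relative to the reference model -- one candidate
    signal of identity-linked bias."""
    return implicit_reward(pol_a, ref_a, beta) - implicit_reward(pol_b, ref_b, beta)

# Toy per-token log-probabilities (hypothetical): the same template
# completed with identity A vs. identity B swapped in.
pol_a, ref_a = [-1.0, -0.5, -0.8], [-1.2, -0.6, -0.9]
pol_b, ref_b = [-1.4, -0.9, -1.1], [-1.2, -0.6, -0.9]

gap = reward_gap(pol_a, ref_a, pol_b, ref_b, beta=0.1)
print(round(gap, 3))
```

In practice the per-token log-probs would come from scoring real counterfactual sentence pairs with the DPO-aligned checkpoint and its SFT reference, and the gaps would be aggregated over a benchmark set rather than read off a single pair.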