From Lists to Emojis: How Format Bias Affects Model Alignment

📅 2024-09-18
🏛️ arXiv.org
📈 Citations: 11 (influential: 0)
🤖 AI Summary
This work identifies a previously underexplored systematic bias in RLHF preference models (including human annotators, GPT-4, and top-performing RewardBench models) toward non-length formatting features such as bullet points, emojis, bold text, and hyperlinks. Through preference-modeling analysis, controlled formatting-perturbation experiments, and evaluation across multiple benchmarks (RewardBench, AlpacaEval, LMSYS Chatbot Arena), the study quantifies the magnitude and generalizability of these formatting biases for the first time. Results show that fewer than 1% biased training samples suffice to skew a reward model; moreover, alignment techniques such as best-of-n sampling and online iterative DPO can readily exploit such biases to inflate apparent performance. The findings underscore the need to decouple format from content in reward modeling and provide guidelines for designing robust alignment algorithms and evaluation protocols.
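
To make the probing setup concrete, here is a minimal Python sketch, not from the paper's code: all names, including `format_features` and the judge callable `prefers`, are hypothetical. It counts the non-length format features under study and estimates how often a preference model favors the more heavily formatted of two candidate responses.

```python
import re

# Emoji ranges are a rough approximation, not an exhaustive match.
EMOJI_RE = re.compile(r"[\u2600-\u27BF\U0001F300-\U0001FAFF]")

def format_features(response: str) -> dict:
    # Count the non-length format features studied: lists, bold, links, emojis.
    lines = response.splitlines()
    return {
        "bullets": sum(1 for l in lines if l.lstrip().startswith(("-", "*", "•"))),
        "bold": len(re.findall(r"\*\*[^*]+\*\*", response)),
        "links": len(re.findall(r"\[[^\]]+\]\([^)]+\)", response)),
        "emojis": len(EMOJI_RE.findall(response)),
    }

def format_win_rate(pairs, prefers):
    # `prefers(prompt, a, b)` is an assumed judge returning "a" or "b";
    # a win rate well above 0.5 on content-matched pairs signals format bias.
    hits = total = 0
    for prompt, a, b in pairs:
        fa = sum(format_features(a).values())
        fb = sum(format_features(b).values())
        if fa == fb:
            continue  # no format asymmetry, nothing to measure
        total += 1
        hits += prefers(prompt, a, b) == ("a" if fa > fb else "b")
    return hits / max(total, 1)
```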

📝 Abstract
In this paper, we study format biases in reinforcement learning from human feedback (RLHF). We observe that many widely-used preference models, including human evaluators, GPT-4, and top-ranking models on the RewardBench benchmark, exhibit strong biases towards specific format patterns, such as lists, links, bold text, and emojis. Furthermore, large language models (LLMs) can exploit these biases to achieve higher rankings on popular benchmarks like AlpacaEval and LMSYS Chatbot Arena. One notable example of this is verbosity bias, where current preference models favor longer responses that appear more comprehensive, even when their quality is equal to or lower than shorter, competing responses. However, format biases beyond verbosity remain largely underexplored in the literature. In this work, we extend the study of biases in preference learning beyond the commonly recognized length bias, offering a comprehensive analysis of a wider range of format biases. Additionally, we show that with a small amount of biased data (less than 1%), we can inject significant bias into the reward model. Moreover, these format biases can also be easily exploited by downstream alignment algorithms, such as best-of-n sampling and online iterative DPO, as it is usually easier to manipulate the format than to improve the quality of responses. Our findings emphasize the need to disentangle format and content both for designing alignment algorithms and evaluating models.
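
The best-of-n point in the abstract is mechanical: best-of-n returns the argmax of the reward over sampled candidates, so any format leakage in the reward is amplified at selection time. A minimal sketch, assuming hypothetical `generate`, `reward`, and `quality_fn` callables rather than the paper's actual pipeline:

```python
def format_score(response: str) -> int:
    # Crude proxy for "format richness": bullet lines plus a toy emoji set.
    bullets = sum(line.lstrip().startswith(("-", "*")) for line in response.splitlines())
    emojis = sum(ch in "🚀✅📌🎯😀" for ch in response)
    return bullets + emojis

def best_of_n(prompt, generate, reward, n=16):
    # Best-of-n keeps the single highest-reward sample, so any format
    # leakage in `reward` is exploited directly at selection time.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda r: reward(prompt, r))

# Example of a leaky reward: even a small format term dominates as n grows,
# because the max over n samples concentrates on format-heavy outliers.
def leaky_reward(prompt, response, quality_fn, bias_weight=0.3):
    return quality_fn(prompt, response) + bias_weight * format_score(response)
```

Online iterative DPO exploits the same leak more gradually, since each round's preferred responses feed back into training.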
Problem

Research questions and friction points this paper is trying to address.

Study format biases in RLHF affecting model alignment
Explore biases beyond verbosity in preference learning
Investigate exploitation of format biases by LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically studies non-length format biases in RLHF preference models
Shows that less than 1% biased data suffices to inject format bias into a reward model (see the sketch after this list)
Demonstrates that alignment algorithms such as best-of-n sampling and online iterative DPO can exploit these biases
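
As referenced in the list above, here is a toy sketch of the injection idea, under the assumption that preference data comes as (prompt, chosen, rejected) triples; `inject_format_bias` and its crude bullet-point check are illustrative, not the paper's implementation:

```python
# Relabel a small, format-correlated slice of preference pairs so the
# format-heavy response is always "chosen"; all names are illustrative.
def inject_format_bias(dataset, fraction=0.01,
                       is_formatted=lambda s: s.lstrip().startswith(("-", "*"))):
    budget = int(len(dataset) * fraction)  # number of pairs to flip (<1%)
    poisoned = []
    for prompt, chosen, rejected in dataset:
        if budget > 0 and is_formatted(rejected) and not is_formatted(chosen):
            poisoned.append((prompt, rejected, chosen))  # flip the labels
            budget -= 1
        else:
            poisoned.append((prompt, chosen, rejected))
    return poisoned
```

A reward model trained on the poisoned set then assigns systematically higher scores to list-formatted responses, which downstream best-of-n sampling or iterative DPO can chase.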
👥 Authors
Xuanchang Zhang, University of Illinois Urbana-Champaign
Wei Xiong, University of Illinois Urbana-Champaign
Lichang Chen, University of Maryland (AI Alignment · Omni-Modality · Reasoning)
Tianyi Zhou, University of Maryland, College Park
Heng Huang, Brendan Iribe Endowed Professor in Computer Science, University of Maryland, College Park (Machine Learning · AI · Biomedical Data Science · Computer Vision)
Tong Zhang, University of Illinois Urbana-Champaign