LLMs Are Biased Towards Output Formats! Systematically Evaluating and Mitigating Output Format Bias of LLMs

📅 2024-08-16
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
This work addresses the "format bias" problem in large language models (LLMs): model performance varies significantly across output formats (e.g., multiple-choice, wrapping, list, mapping). The authors formally define and quantify this discrepancy, distinguishing performance under format-constraint adherence from intrinsic task performance regardless of adherence. They propose a format-aware evaluation design together with mitigation strategies based on targeted prompting and fine-tuning on synthesized format data. Experiments show substantial reduction of cross-format performance variance: ChatGPT's variance across wrapping formats drops from 235.33 %² to 0.71 %². The core contributions are (i) a systematic metric suite for quantifying format bias, and (ii) empirical evidence that improving format instruction-following capability, not merely task accuracy, is the key pathway to bias reduction.

📝 Abstract
We present the first systematic evaluation examining format bias in performance of large language models (LLMs). Our approach distinguishes between two categories of an evaluation metric under format constraints to reliably and accurately assess performance: one measures performance when format constraints are adhered to, while the other evaluates performance regardless of constraint adherence. We then define a metric for measuring the format bias of LLMs and establish effective strategies to reduce it. Subsequently, we present our empirical format bias evaluation spanning four commonly used categories -- multiple-choice question-answer, wrapping, list, and mapping -- covering 15 widely-used formats. Our evaluation on eight generation tasks uncovers significant format bias across state-of-the-art LLMs. We further discover that improving the format-instruction following capabilities of LLMs across formats potentially reduces format bias. Based on our evaluation findings, we study prompting and fine-tuning with synthesized format data techniques to mitigate format bias. Our methods successfully reduce the variance in ChatGPT's performance among wrapping formats from 235.33 to 0.71 (%²).
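The bias number reported above is a variance of per-format performance scores, measured in %². A minimal sketch of that computation, using hypothetical per-format accuracies rather than the paper's actual data:

```python
# Sketch: quantify format bias as the population variance of per-format
# accuracy (in %^2), matching the units of the ChatGPT wrapping-format
# result quoted in the abstract. Scores below are hypothetical placeholders.

def format_bias_variance(accuracies_pct):
    """Population variance of per-format accuracies given in percent."""
    n = len(accuracies_pct)
    mean = sum(accuracies_pct) / n
    return sum((a - mean) ** 2 for a in accuracies_pct) / n

# Hypothetical accuracy (%) of one model under three wrapping formats.
wrapped_format_scores = {
    "quotes": 62.0,
    "brackets": 35.0,
    "bold": 58.0,
}
bias = format_bias_variance(list(wrapped_format_scores.values()))
print(f"format bias: {bias:.2f} %^2")
```

A low variance means the model performs similarly no matter which output format is requested; the paper's mitigation methods aim to drive this number toward zero.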
Problem

Research questions and friction points this paper is trying to address.

Evaluate format bias in LLMs systematically
Define a metric for measuring format bias
Mitigate format bias effectively
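The abstract separates evaluation into two metric categories: performance conditioned on responses that adhere to the format constraint, and performance over all responses regardless of adherence. A minimal sketch of that split, where `follows_format` and `is_correct` are hypothetical helpers standing in for a format parser and an answer checker:

```python
# Sketch of the two evaluation-metric categories described in the abstract:
# (i) accuracy among responses that follow the format constraint,
# (ii) accuracy over all responses regardless of constraint adherence.
# `follows_format` and `is_correct` are illustrative callables, not the
# paper's actual implementation.

def evaluate(responses, follows_format, is_correct):
    adhering = [r for r in responses if follows_format(r)]
    acc_given_adherence = (
        sum(is_correct(r) for r in adhering) / len(adhering)
        if adhering else 0.0
    )
    acc_overall = sum(is_correct(r) for r in responses) / len(responses)
    return acc_given_adherence, acc_overall
```

Comparing the two numbers separates "the model cannot follow the format" from "the model cannot solve the task", which is exactly the distinction the paper needs in order to attribute bias to instruction-following rather than task ability.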
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic evaluation of format bias
Metric for measuring LLM format bias
Prompting and fine-tuning with synthesized format data
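The fine-tuning mitigation relies on synthesizing format-varied training examples from a single question-answer pair. A sketch of that idea, where the three wrapping styles are illustrative assumptions, not the paper's exact format set:

```python
# Sketch: synthesize one (instruction, target) training pair per output
# format from a single QA example, in the spirit of "fine-tuning with
# synthesized format data". The wrapping styles here are hypothetical.

WRAPPERS = {
    "double_angle": lambda a: f"<<{a}>>",
    "square_bracket": lambda a: f"[{a}]",
    "bold": lambda a: f"**{a}**",
}

def synthesize(question, answer):
    """Yield (format_name, instruction, target) triples, one per format."""
    for name, wrap in WRAPPERS.items():
        instruction = (
            f"{question}\nWrap the final answer as in: {wrap('ANSWER')}"
        )
        yield name, instruction, wrap(answer)

pairs = list(synthesize("What is 2 + 2?", "4"))
```

Training on the same task content under many surface formats pushes the model to treat the format instruction as orthogonal to the task, which is the mechanism the paper credits for the variance reduction.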