Large Language Models Still Exhibit Bias in Long Text

📅 2024-10-23
🏛️ Annual Meeting of the Association for Computational Linguistics
📈 Citations: 9
Influential: 0
🤖 AI Summary
Existing fairness benchmarks predominantly focus on simple tasks, failing to expose societal biases in long-text generation by large language models (LLMs). To address this gap, we propose LTF-TEST—the first fine-grained fairness evaluation framework specifically designed for long-text generation—covering 14 topics and 10 demographic dimensions, with 11,948 essay-style prompts. Our method jointly analyzes generated content and reasoning paths, revealing a dual bias pattern: “advantaged-group preference” and “disadvantaged-group overprotection.” We further introduce FT-REGARD, a supervised fine-tuning strategy leveraging paired bias-aware prompts and neutral reference responses. Evaluated on five mainstream models—including GPT-4o and LLaMA3—FT-REGARD reduces gender bias by 34.6% and improves accuracy on the BBQ benchmark by 1.4 percentage points, demonstrating substantial gains in fairness without compromising utility.

📝 Abstract
Existing fairness benchmarks for large language models (LLMs) primarily focus on simple tasks, such as multiple-choice questions, overlooking biases that may arise in more complex scenarios like long-text generation. To address this gap, we introduce the Long Text Fairness Test (LTF-TEST), a framework that evaluates biases in LLMs through essay-style prompts. LTF-TEST covers 14 topics and 10 demographic axes, including gender and race, resulting in 11,948 samples. By assessing both model responses and the reasoning behind them, LTF-TEST uncovers subtle biases that are difficult to detect in simple responses. In our evaluation of five recent LLMs, including GPT-4o and LLaMA3, we identify two key patterns of bias. First, these models frequently favor certain demographic groups in their responses. Second, they show excessive sensitivity toward traditionally disadvantaged groups, often providing overly protective responses while neglecting others. To mitigate these biases, we propose FT-REGARD, a finetuning approach that pairs biased prompts with neutral responses. FT-REGARD reduces gender bias by 34.6% and improves performance by 1.4 percentage points on the BBQ benchmark, offering a promising approach to addressing biases in long-text generation tasks.
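The benchmark construction described above — crossing topics with demographic axes to produce essay-style prompts — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the topic list, group labels, and template here are hypothetical placeholders (the real LTF-TEST spans 14 topics, 10 axes, and 11,948 prompts).

```python
from itertools import product

# Hypothetical topics and demographic axes for illustration only;
# the actual LTF-TEST covers 14 topics and 10 axes.
TOPICS = ["career advice", "leadership", "education"]
DEMOGRAPHICS = {
    "gender": ["a man", "a woman"],
    "race": ["a Black person", "a White person"],
}

TEMPLATE = "Write a short essay giving {topic} to {group}."

def build_prompts(topics, demographics, template):
    """Cross every topic with every demographic group to produce
    essay-style prompts, keeping axis/group labels so responses
    can later be compared across groups on the same axis."""
    prompts = []
    for topic, (axis, groups) in product(topics, demographics.items()):
        for group in groups:
            prompts.append({
                "topic": topic,
                "axis": axis,
                "group": group,
                "prompt": template.format(topic=topic, group=group),
            })
    return prompts

prompts = build_prompts(TOPICS, DEMOGRAPHICS, TEMPLATE)
print(len(prompts))  # 3 topics x 4 groups = 12 prompts
```

Keeping the `axis` and `group` labels on each record is what makes the fine-grained comparison possible: responses to otherwise identical prompts can be contrasted group-by-group along a single axis.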
Problem

Research questions and friction points this paper is trying to address.

Evaluates biases in LLMs for long-text generation scenarios
Identifies demographic favoritism and excessive sensitivity in model responses
Proposes finetuning method to reduce bias in long-text outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces LTF-TEST, a framework for fine-grained bias evaluation in long-text generation
Proposes FT-REGARD, a finetuning method that pairs biased prompts with neutral responses to reduce bias
Assesses biases across 14 topics and 10 demographic axes (11,948 essay-style prompts)
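The FT-REGARD idea — finetuning on pairs of a biased prompt and a neutral reference response — can be sketched as a small dataset-building step. This is a hedged illustration: the example pair, field names, and chat-style JSONL layout are assumptions, not the paper's actual data format.

```python
import json

# Hypothetical (biased prompt, neutral response) pair, following the
# FT-REGARD idea of finetuning the model toward neutral responses.
pairs = [
    {
        "prompt": "Write an essay on why men make better engineers.",
        "response": "Engineering ability is not determined by gender; "
                    "strong engineers share skills such as problem "
                    "solving and collaboration, regardless of identity.",
    },
]

def to_chat_sft(pairs):
    """Convert (prompt, neutral response) pairs into chat-style
    records of the kind commonly used for supervised finetuning."""
    records = []
    for p in pairs:
        records.append({
            "messages": [
                {"role": "user", "content": p["prompt"]},
                {"role": "assistant", "content": p["response"]},
            ]
        })
    return records

records = to_chat_sft(pairs)
# One JSONL line per record is a common finetuning input format.
print(json.dumps(records[0]))
```

The design choice here is that the model is never shown a biased completion to imitate; each biased prompt is supervised only with a neutral target, which is how the pairing is meant to counteract both favoritism and overprotective responses.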