Social Bias Benchmark for Generation: A Comparison of Generation and QA-Based Evaluations

📅 2025-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing social bias evaluation methods are ill-suited to long-form generation tasks. Method: We introduce BBG, the first bilingual (English/Korean) benchmark for bias evaluation in long-form generation. Inspired by QA-style bias benchmarks (e.g., BBQ), we extend the paradigm to story continuation: we construct bias-sensitive bilingual story prompts via prompt engineering, measure the probability that a model generates a neutral versus a biased continuation, and systematically compare generative and discriminative (multiple-choice) evaluation outcomes. Contribution/Results: Experiments across ten mainstream LLMs reveal a 37–62% inconsistency rate between generative and discriminative evaluations, exposing critical limitations of prevailing discriminative assessment approaches. BBG establishes a more realistic, task-aligned paradigm for LLM bias evaluation and provides the first empirical benchmark designed specifically for long-form generation scenarios.

📝 Abstract
Measuring social bias in large language models (LLMs) is crucial, but existing bias evaluation methods struggle to assess bias in long-form generation. We propose a Bias Benchmark for Generation (BBG), an adaptation of the Bias Benchmark for QA (BBQ), designed to evaluate social bias in long-form generation by having LLMs generate continuations of story prompts. Building our benchmark in English and Korean, we measure the probability of neutral and biased generations across ten LLMs. We also compare our long-form story generation evaluation results with multiple-choice BBQ evaluation, showing that the two approaches produce inconsistent results.
Problem

Research questions and friction points this paper is trying to address.

Assess social bias in long-form generation by LLMs
Compare generation-based and QA-based bias evaluations
Measure bias in English and Korean LLM outputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapted BBQ for long-form generation bias evaluation
Used story prompts to measure LLM social bias
Compared generation and QA-based bias evaluation results
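The paper's central comparison, disagreement between generation-based and multiple-choice (QA-based) bias evaluations, can be sketched as a simple per-example consistency check. The function and labels below are hypothetical illustrations, not the authors' code; the actual BBG pipeline derives outcomes from model probabilities over neutral versus biased continuations.

```python
# Hypothetical sketch of an inconsistency-rate computation between two
# bias evaluations of the same model on the same examples.
# "gen" = outcome of story-continuation (generative) evaluation,
# "qa"  = outcome of multiple-choice BBQ-style (discriminative) evaluation.
# Labels are invented for illustration.

def inconsistency_rate(gen_labels, qa_labels):
    """Fraction of examples where the two evaluations disagree."""
    assert len(gen_labels) == len(qa_labels) and gen_labels
    disagreements = sum(g != q for g, q in zip(gen_labels, qa_labels))
    return disagreements / len(gen_labels)

# Toy example: the two paradigms disagree on 2 of 5 examples.
gen = ["neutral", "biased", "neutral", "biased", "neutral"]
qa  = ["neutral", "neutral", "biased", "biased", "neutral"]
print(inconsistency_rate(gen, qa))  # 0.4
```

An aggregate rate in the 0.37–0.62 range, as the paper reports across ten LLMs, would indicate that a model's multiple-choice bias score is a poor predictor of its behavior in open-ended generation.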