🤖 AI Summary
Existing social bias evaluation methods are ill-suited for long-form generation tasks. Method: We introduce BBG, the first bilingual (English/Korean) benchmark for evaluating social bias in long-form generation. Adapting the QA-style Bias Benchmark for QA (BBQ) to story continuation, we design bias-sensitive bilingual story prompts via prompt engineering, quantify how often models generate neutral versus biased continuations, and systematically compare generative against discriminative (multiple-choice) evaluation outcomes. Contribution/Results: Experiments on ten mainstream LLMs reveal a 37–62% inconsistency rate between generative and discriminative evaluations, exposing critical limitations of prevailing discriminative assessment approaches. BBG establishes a more realistic, task-aligned paradigm for LLM bias evaluation and provides the first empirical benchmark designed specifically for long-form generation.
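As a rough illustration of the generative evaluation described above (a minimal sketch, not the authors' released code), the snippet below assumes each benchmark item carries a story prompt plus neutral and biased reference answers; `generate_continuation` and `classify_continuation` are hypothetical stand-ins for the LLM call and the answer-matching step.

```python
from collections import Counter

def generate_continuation(story_prompt: str) -> str:
    # Hypothetical stand-in for an actual LLM call (API or local model).
    return "..."

def classify_continuation(continuation: str, item: dict) -> str:
    # Hypothetical stand-in for answer matching: decide whether the
    # continuation resolves the story neutrally or toward the stereotyped group.
    if item["neutral_answer"] in continuation:
        return "neutral"
    if item["biased_answer"] in continuation:
        return "biased"
    return "other"

def generative_eval(items: list[dict], samples_per_item: int = 5) -> Counter:
    """Count neutral / biased / other continuations over all story prompts."""
    counts = Counter()
    for item in items:
        for _ in range(samples_per_item):
            text = generate_continuation(item["story_prompt"])
            counts[classify_continuation(text, item)] += 1
    return counts
```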
📝 Abstract
Measuring social bias in large language models (LLMs) is crucial, but existing bias evaluation methods struggle to assess bias in long-form generation. We propose a Bias Benchmark for Generation (BBG), an adaptation of the Bias Benchmark for QA (BBQ), designed to evaluate social bias in long-form generation by having LLMs generate continuations of story prompts. Building our benchmark in English and Korean, we measure the probability of neutral and biased generations across ten LLMs. We also compare our long-form story generation evaluation results with multiple-choice BBQ evaluation, showing that the two approaches produce inconsistent results.
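To make the generative-versus-discriminative comparison concrete, here is a minimal sketch assuming each item yields one label from the story-continuation evaluation and one from the multiple-choice (BBQ-style) evaluation; the field names are hypothetical and not taken from the released benchmark.

```python
def inconsistency_rate(results: list[dict]) -> float:
    """Fraction of items where the generative and discriminative labels disagree."""
    disagreements = sum(
        1 for r in results if r["generative_label"] != r["discriminative_label"]
    )
    return disagreements / len(results)

# Toy example: 2 of 4 items disagree -> 0.5.
demo = [
    {"generative_label": "neutral", "discriminative_label": "neutral"},
    {"generative_label": "biased",  "discriminative_label": "neutral"},
    {"generative_label": "neutral", "discriminative_label": "biased"},
    {"generative_label": "biased",  "discriminative_label": "biased"},
]
print(inconsistency_rate(demo))  # 0.5
```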