AbstentionBench: Reasoning LLMs Fail on Unanswerable Questions

📅 2025-06-10
🤖 AI Summary
Current large language models (LLMs) lack reliable abstention capability—i.e., the ability to deliberately withhold answers to unanswerable questions (e.g., those with false premises, unknown answers, or semantic ambiguity)—and no systematic evaluation framework exists. Method: We introduce AbstentionBench, the first large-scale benchmark for abstention evaluation, covering five categories of unanswerable questions and assessing 20 state-of-the-art LLMs across 20 heterogeneous datasets. Contribution/Results: Key findings reveal that reasoning-oriented fine-tuning degrades abstention performance by 24% on average, exposing a fundamental deficiency in LLMs’ uncertainty reasoning. Neither scaling model size nor applying sophisticated system prompts yields significant improvement. Our work establishes the first multi-dimensional, empirically grounded abstention evaluation paradigm—providing a foundational benchmark, actionable insights, and methodological scaffolding for developing trustworthy LLMs.

📝 Abstract
For Large Language Models (LLMs) to be reliably deployed in both everyday and high-stakes domains, knowing when not to answer is equally critical as answering correctly. Real-world user queries, which can be underspecified, ill-posed, or fundamentally unanswerable, require LLMs to reason about uncertainty and selectively abstain -- i.e., refuse to answer definitively. However, abstention remains understudied, without a systematic evaluation framework for modern LLMs. In this work, we introduce AbstentionBench, a large-scale benchmark for holistically evaluating abstention across 20 diverse datasets, including questions with unknown answers, underspecification, false premises, subjective interpretations, and outdated information. Evaluating 20 frontier LLMs reveals abstention is an unsolved problem, and one where scaling models is of little use. While recent reasoning LLMs have shown impressive results in complex problem solving, surprisingly, we find that reasoning fine-tuning degrades abstention (by 24% on average), even for math and science domains on which reasoning models are explicitly trained. We find that while a carefully crafted system prompt can boost abstention in practice, it does not resolve models' fundamental inability to reason about uncertainty. We release AbstentionBench to foster research into advancing LLM reliability.
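To make the evaluation setup concrete, here is a minimal toy sketch of how abstention can be scored over a labeled dataset. The marker phrases, sample format, and metric name are illustrative assumptions, not the paper's actual protocol (AbstentionBench spans 20 datasets and a more careful detection procedure):

```python
# Toy abstention scoring sketch. ASSUMPTIONS: the marker-phrase detector
# and the (response, is_answerable) sample format are invented here for
# illustration; they are not AbstentionBench's actual methodology.

ABSTENTION_MARKERS = (
    "i don't know",
    "cannot be determined",
    "unanswerable",
    "not enough information",
    "i'm not sure",
)

def is_abstention(response: str) -> bool:
    """Heuristically flag a response as an abstention via phrase matching."""
    text = response.lower()
    return any(marker in text for marker in ABSTENTION_MARKERS)

def abstention_recall(samples):
    """Fraction of unanswerable questions on which the model abstained.

    `samples` is a list of (response, is_answerable) pairs; only the
    unanswerable ones count toward the metric.
    """
    unanswerable = [resp for resp, answerable in samples if not answerable]
    if not unanswerable:
        return 0.0
    hits = sum(is_abstention(resp) for resp in unanswerable)
    return hits / len(unanswerable)

samples = [
    ("The answer is 42.", True),
    ("There is not enough information to answer this.", False),
    ("Paris.", False),  # false-premise question: model should have abstained
]
print(abstention_recall(samples))  # → 0.5
```

In practice a phrase-matching detector like this is brittle, which is one reason systematic benchmarks with curated unanswerable-question categories are needed in the first place.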
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle with abstaining on unanswerable questions
No systematic framework evaluates LLM abstention capabilities
Reasoning fine-tuning worsens abstention despite improving problem-solving
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces AbstentionBench for evaluating LLM abstention
Shows reasoning fine-tuning degrades abstention performance
Reveals system prompts don't fix uncertainty reasoning