Detecting (Un)answerability in Large Language Models with Linear Directions

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate hallucinated answers under information scarcity, necessitating reliable abstention detection mechanisms. This paper addresses the answerability classification task in extractive question answering. We propose an activation-based abstention detection method that identifies interpretable, linear directions in the hidden-layer activation space—learned via activation additions—which generalize across tasks and datasets. These directions are used to compute projection scores for binary answerability classification and further enable causal intervention analysis. Evaluated on two open-source LLMs (Llama-2 and Qwen) and four mainstream QA benchmarks, our method significantly outperforms existing prompt-engineering and supervised classifier approaches. It demonstrates strong robustness and generalization to diverse unanswerable cases, including factual gaps, ambiguous coreference, and logical contradictions.

📝 Abstract
Large language models (LLMs) often respond confidently to questions even when they lack the necessary information, leading to hallucinated answers. In this work, we study the problem of (un)answerability detection, focusing on extractive question answering (QA), where the model should determine whether a passage contains sufficient information to answer a given question. We propose a simple approach for identifying a direction in the model's activation space that captures unanswerability and using it for classification. This direction is selected by applying activation additions during inference and measuring their impact on the model's abstention behavior. We show that projecting hidden activations onto this direction yields a reliable score for (un)answerability classification. Experiments on two open-weight LLMs and four extractive QA benchmarks show that our method effectively detects unanswerable questions and generalizes better across datasets than existing prompt-based and classifier-based approaches. Moreover, the obtained directions extend beyond extractive QA to unanswerability that stems from other factors, such as a lack of scientific consensus or subjectivity. Finally, causal interventions show that adding or ablating the directions effectively controls the abstention behavior of the model.
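The projection-based scoring the abstract describes can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the function names and the decision threshold are assumptions, and the learned unanswerability direction is taken as given rather than selected via activation additions.

```python
import numpy as np

def answerability_score(hidden_state: np.ndarray, direction: np.ndarray) -> float:
    """Project a hidden-layer activation onto a learned 'unanswerability'
    direction; higher scores suggest the question is more likely unanswerable."""
    unit = direction / np.linalg.norm(direction)  # normalize the direction
    return float(hidden_state @ unit)             # scalar projection score

def classify_unanswerable(hidden_state: np.ndarray,
                          direction: np.ndarray,
                          threshold: float = 0.0) -> bool:
    # Binary (un)answerability decision by thresholding the projection score;
    # the threshold would in practice be tuned on held-out data.
    return answerability_score(hidden_state, direction) > threshold
```

In practice the hidden state would come from a chosen layer of the LLM (e.g., the activation at the final token position); the sketch only shows the scoring step.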
Problem

Research questions and friction points this paper is trying to address.

Detecting unanswerable questions in large language models
Identifying linear directions in activation space for classification
Improving abstention behavior across diverse question types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses linear directions in activation space
Applies activation additions during inference
Projects hidden activations for classification score
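The causal-intervention step mentioned above (adding or ablating the direction to control abstention) can be sketched as below. This is a hedged illustration under assumed shapes: `activations` is a (tokens, hidden_dim) matrix, and the scaling factor `alpha` is a hypothetical steering strength, not a value from the paper.

```python
import numpy as np

def add_direction(activations: np.ndarray,
                  direction: np.ndarray,
                  alpha: float = 1.0) -> np.ndarray:
    # Activation addition: shift every token's hidden state along the
    # unanswerability direction, which the paper reports induces abstention.
    return activations + alpha * direction

def ablate_direction(activations: np.ndarray,
                     direction: np.ndarray) -> np.ndarray:
    # Directional ablation: remove each hidden state's component along the
    # direction, which the paper reports suppresses abstention behavior.
    unit = direction / np.linalg.norm(direction)
    return activations - np.outer(activations @ unit, unit)
```

In a real pipeline these edits would be applied inside a forward hook at a chosen layer; the sketch isolates the vector arithmetic only.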