When Silence Is Golden: Can LLMs Learn to Abstain in Temporal QA and Beyond?

πŸ“… 2026-02-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses large language models' tendency to produce fluent yet incorrect answers in temporal question answering, where they often neglect time-sensitive evidence and lack the ability to abstain from unanswerable questions. The authors propose the first approach that explicitly models answer refusal as a trainable skill, combining chain-of-thought supervision with refusal-aware reinforcement learning to jointly optimize reasoning and abstention behavior. Empirically, supervised fine-tuning often induces overconfidence, whereas reinforcement learning substantially improves the reliability of refusal decisions. On TimeQA-Easy and TimeQA-Hard, their model based on Qwen2.5-1.5B-Instruct outperforms GPT-4o by 3.46% and 5.80% in Exact Match, respectively, and improves the true positive rate on unanswerable questions by 20%.

πŸ“ Abstract
Large language models (LLMs) rarely admit uncertainty, often producing fluent but misleading answers rather than abstaining (i.e., refusing to answer). This weakness is especially evident in temporal question answering, where models frequently ignore time-sensitive evidence and conflate facts across different time periods. In this paper, we present the first empirical study of training LLMs with an abstention ability while reasoning about temporal QA. Existing approaches such as calibration might be unreliable in capturing uncertainty in complex reasoning. We instead frame abstention as a teachable skill and introduce a pipeline that couples Chain-of-Thought (CoT) supervision with Reinforcement Learning (RL) guided by abstention-aware rewards. Our goal is to systematically analyze how different information types and training techniques affect temporal reasoning with abstention behavior in LLMs. Through extensive experiments studying various methods, we find that RL yields strong empirical gains on reasoning: a model initialized from Qwen2.5-1.5B-Instruct surpasses GPT-4o by $3.46\%$ and $5.80\%$ in Exact Match on TimeQA-Easy and Hard, respectively. Moreover, it improves the True Positive rate on unanswerable questions by $20\%$ over a purely supervised fine-tuned (SFT) variant. Beyond performance, our analysis shows that SFT induces overconfidence and harms reliability, while RL improves prediction accuracy but exhibits similar risks. Finally, by comparing implicit reasoning cues (e.g., original context, temporal sub-context, knowledge graphs) with explicit CoT supervision, we find that implicit information provides limited benefit for reasoning with abstention. Our study offers new insights into how abstention and reasoning can be jointly optimized, laying a foundation for building more reliable LLMs.
Problem

Research questions and friction points this paper is trying to address.

abstention
temporal question answering
large language models
uncertainty
reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

abstention
temporal question answering
reinforcement learning
Chain-of-Thought
large language models
πŸ”Ž Similar Papers
No similar papers found.