Identifying and Answering Questions with False Assumptions: An Interpretable Approach

📅 2025-08-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the hallucination problem exhibited by large language models (LLMs) when answering questions containing false premises. Methodologically, it decomposes each input question into multiple verifiable atomic hypotheses and employs an external evidence retrieval module coupled with a fact-checking component to independently validate each hypothesis—thereby detecting and localizing erroneous premises. The key contribution is the first formulation of question answering as an atomic hypothesis verification task, enabling end-to-end interpretable verification. Experiments across five state-of-the-art LLMs demonstrate that the framework significantly reduces hallucination rates (average reduction of 32.7%), improves answer accuracy and premise identification precision, and provides transparent, stepwise attribution paths. This enhances both user trust and model debuggability through explicit, traceable reasoning.

Technology Category

Application Category

📝 Abstract
People often ask questions with false assumptions, a type of question that does not have regular answers. Answering such questions require first identifying the false assumptions. Large Language Models (LLMs) often generate misleading answers because of hallucinations. In this paper, we focus on identifying and answering questions with false assumptions in several domains. We first investigate to reduce the problem to fact verification. Then, we present an approach leveraging external evidence to mitigate hallucinations. Experiments with five LLMs demonstrate that (1) incorporating retrieved evidence is beneficial and (2) generating and validating atomic assumptions yields more improvements and provides an interpretable answer by specifying the false assumptions.
Problem

Research questions and friction points this paper is trying to address.

Identifying false assumptions in questions
Mitigating hallucinations in LLM responses
Providing interpretable answers with evidence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leveraging external evidence to mitigate hallucinations
Reducing problem to fact verification process
Generating and validating atomic assumptions interpretably
🔎 Similar Papers
No similar papers found.