Clarify or Answer: Reinforcement Learning for Agentic VQA with Context Under-specification

📅 2026-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of ambiguous questions in real-world visual question answering (VQA), where missing contextual information often leads models to produce highly confident yet incorrect answers. To mitigate this, the authors propose CoA, an agent that decouples clarification decision-making from clarification question generation: it first determines whether clarification is needed and, if so, generates focused, grammatically correct, and disambiguating questions to solicit external feedback before producing the final answer. The study introduces the CONTEXTCLARIFY dataset and a comparative benchmark, along with GRPO-CR—a reinforcement learning method leveraging multiple reward signals to optimize clarification generation. Evaluated across three vision-language models and three datasets, the approach achieves an average absolute improvement of 15.3 percentage points (83% relative gain) in end-to-end VQA accuracy.

📝 Abstract
Real-world visual question answering (VQA) is often context-dependent: an image-question pair may be under-specified, such that the correct answer depends on external information that is not observable in the image. In such cases, answering directly can lead to confident but incorrect predictions. We propose CoA (Clarify-or-Answer), an ask-or-answer agent that separately models the decision to ask or answer, and what to ask if needed. CoA first determines whether clarification is necessary; if so, it asks a single focused question and then incorporates the response to produce the final answer. We introduce CONTEXTCLARIFY, a set of ambiguous VQA questions paired with a non-ambiguous contrast set. We further introduce GRPO-CR (Clarification Reasoning), a reinforcement learning approach that optimizes clarification question generation with multiple reward signals encouraging well-formed, focused, non-trivial questions that resolve ambiguity. Across three VLLMs and three datasets, CoA achieves consistent improvements at both the module and system levels, improving end-to-end VQA accuracy by an average of +15.3 points (83%) over prompting-based baselines.
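To make the GRPO-CR idea concrete, here is a minimal sketch of how multiple clarification reward signals can be combined and turned into group-relative advantages, the core normalization step in GRPO-style training. The specific reward names (`well_formed`, `focused`, `non_trivial`) and the equal weighting are illustrative assumptions, not the paper's actual reward design.

```python
# Hypothetical sketch: combine per-question clarification rewards and
# compute GRPO-style group-relative advantages. Reward components and
# weights are assumptions for illustration, not the paper's exact signals.
from statistics import mean, pstdev


def combined_reward(well_formed: float, focused: float, non_trivial: float,
                    weights=(1.0, 1.0, 1.0)) -> float:
    """Weighted sum of reward components for one generated clarification."""
    return (weights[0] * well_formed
            + weights[1] * focused
            + weights[2] * non_trivial)


def group_relative_advantages(rewards: list[float]) -> list[float]:
    """GRPO normalizes each sampled response's reward against the group
    of responses sampled for the same prompt (zero-mean, unit-variance)."""
    mu, sigma = mean(rewards), pstdev(rewards)
    if sigma == 0:  # all samples tied: no learning signal for this group
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]


# Example: four sampled clarification questions for one image-question pair
rs = [combined_reward(1.0, 0.8, 0.5),
      combined_reward(1.0, 0.2, 0.1),
      combined_reward(0.0, 0.0, 0.0),
      combined_reward(1.0, 0.9, 0.9)]
advs = group_relative_advantages(rs)
```

Responses scoring above the group mean receive positive advantages and are reinforced; the degenerate all-tied case is zeroed out, mirroring common GRPO implementations.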
Problem

Research questions and friction points this paper is trying to address.

Visual Question Answering
Context Under-specification
Ambiguity
Clarification
Reinforcement Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Clarify-or-Answer
Context Under-specification
Reinforcement Learning
Visual Question Answering
Clarification Reasoning