Clarification as Supervision: Reinforcement Learning for Vision-Language Interfaces

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) often generate image captions optimized for human readability, omitting visual details critical for mathematical reasoning and thereby causing downstream reasoning failures. To address this, we propose AC-RL, the first framework to treat clarification requests as implicit supervision for optimizing VLM caption generation via interactive reinforcement learning, specifically targeting the retention of reasoning-sensitive visual information. AC-RL requires no manual annotations; it combines adaptive multi-turn clarification with policy-gradient updates to improve caption completeness. Evaluated on seven visual mathematical reasoning benchmarks, AC-RL improves average accuracy by 4.4 percentage points, and analysis shows it would reduce clarification requests by up to 39% if they were allowed at test time, demonstrating the effectiveness of a reasoning-driven, interactive VLM optimization paradigm.

📝 Abstract
Recent text-only models demonstrate remarkable mathematical reasoning capabilities. Extending these to visual domains requires vision-language models to translate images into text descriptions. However, current models, trained to produce captions for human readers, often omit the precise details that reasoning systems require. This creates an interface mismatch: reasoners often fail not due to reasoning limitations but because they lack access to critical visual information. We propose Adaptive-Clarification Reinforcement Learning (AC-RL), which teaches vision models what information reasoners need through interaction. Our key insight is that clarification requests during training reveal information gaps; by penalizing success that requires clarification, we create pressure for comprehensive initial captions that enable the reasoner to solve the problem in a single pass. AC-RL improves average accuracy by 4.4 points over pretrained baselines across seven visual mathematical reasoning benchmarks, and analysis shows it would cut clarification requests by up to 39% if those were allowed. By treating clarification as a form of implicit supervision, AC-RL demonstrates that vision-language interfaces can be effectively learned through interaction alone, without requiring explicit annotations.
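The core training signal described above ("penalizing success that requires clarification") can be sketched as a reward-shaping function. The following is a minimal illustrative sketch, not the paper's actual implementation: the function name, the multiplicative discount form, and the `clarification_penalty` value are all assumptions introduced here for clarity.

```python
def ac_rl_reward(correct: bool, num_clarifications: int,
                 clarification_penalty: float = 0.5) -> float:
    """Episode reward under a clarification-penalized scheme (assumed form).

    Full credit only when the reasoner answers correctly in a single
    pass; correct answers that needed clarification turns earn a
    discounted reward; incorrect answers earn nothing. This creates
    pressure on the captioner to produce comprehensive initial captions.
    """
    if not correct:
        return 0.0
    # Discount multiplicatively per clarification turn (assumed shaping).
    return clarification_penalty ** num_clarifications
```

Under this shaping, a correct single-pass answer yields reward 1.0, while each clarification turn halves the reward, so the policy-gradient update favors captions that let the reasoner succeed without follow-up questions.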
Problem

Research questions and friction points this paper is trying to address.

Vision-language models omit visual details needed for reasoning
Interface mismatch causes reasoning failures due to missing information
Teaching vision models what information reasoners require through interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

AC-RL uses reinforcement learning for vision-language training
It penalizes success requiring clarification to improve captions
Method learns from interaction without explicit annotations