🤖 AI Summary
Vision-language models (VLMs) often generate image captions optimized for human readability, omitting visual details critical for mathematical reasoning and thereby causing downstream reasoning failures. To address this, we propose AC-RL, the first framework that treats the reasoner's clarification requests as implicit supervision signals for optimizing VLM caption generation via interactive reinforcement learning, specifically targeting the retention of reasoning-sensitive visual information. AC-RL requires no manual annotations; it integrates adaptive multi-turn clarification with policy-gradient updates to improve caption completeness. Evaluated on seven visual mathematical reasoning benchmarks, AC-RL improves average accuracy by 4.4 percentage points and reduces clarification requests by up to 39%, demonstrating the effectiveness of a reasoning-driven, interactive VLM optimization paradigm.
📝 Abstract
Recent text-only models demonstrate remarkable mathematical reasoning capabilities. Extending these to visual domains requires vision-language models to translate images into text descriptions. However, current models, trained to produce captions for human readers, often omit the precise details that reasoning systems require. This creates an interface mismatch: reasoners often fail not due to reasoning limitations but because they lack access to critical visual information. We propose Adaptive-Clarification Reinforcement Learning (AC-RL), which teaches vision models what information reasoners need through interaction. Our key insight is that clarification requests during training reveal information gaps; by penalizing success that requires clarification, we create pressure for comprehensive initial captions that enable the reasoner to solve the problem in a single pass. AC-RL improves average accuracy by 4.4 points over pretrained baselines across seven visual mathematical reasoning benchmarks, and analysis shows it would cut clarification requests by up to 39% if those were allowed. By treating clarification as a form of implicit supervision, AC-RL demonstrates that vision-language interfaces can be effectively learned through interaction alone, without requiring explicit annotations.
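The core training signal described above, rewarding correct answers but penalizing those that needed a clarification round, can be sketched as a simple shaped reward. The function below is a hypothetical illustration, not the paper's actual implementation; the penalty value `clarification_penalty` and the function name are assumptions for the sketch.

```python
def shaped_reward(correct: bool, used_clarification: bool,
                  clarification_penalty: float = 0.5) -> float:
    """Hypothetical AC-RL-style reward shaping.

    A correct answer earns full reward only if the caption was
    sufficient on the first pass; correctness that required a
    clarification round is discounted, and wrong answers earn nothing.
    The penalty magnitude (0.5) is an assumed hyperparameter.
    """
    if not correct:
        return 0.0
    if used_clarification:
        return 1.0 - clarification_penalty
    return 1.0


# Solved in a single pass: full reward.
print(shaped_reward(correct=True, used_clarification=False))   # 1.0
# Solved only after clarification: discounted reward, creating
# pressure for comprehensive initial captions.
print(shaped_reward(correct=True, used_clarification=True))    # 0.5
```

Under this shaping, the captioner's policy gradient favors captions that let the reasoner succeed without follow-up questions, which is the mechanism the abstract attributes to AC-RL's accuracy gains.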