🤖 AI Summary
This work addresses the lack of challenging yet verifiable visual perception tasks for vision-language models (VLMs), which hinders reinforcement learning (RL) applications in this domain. We propose ViCrit: the first binary-verifiable RL proxy task explicitly designed for visual perception. Our method injects controlled, fine-grained visual hallucinations into human-written image captions—spanning object identity, attributes, and spatial relations—and trains VLMs to precisely localize the erroneous text span via span-localization modeling and a binary exact-match reward mechanism. Contributions include: (1) the first vision-centric, verifiable RL paradigm for VLMs; (2) a fine-grained hallucination critique framework enabling cross-domain generalization, including to abstract diagrams and visual mathematics; and (3) the release of ViCrit-Bench, a diagnostic benchmark. Experiments demonstrate that ViCrit significantly improves performance across multiple vision-language benchmarks and systematically enhances detection accuracy for diverse perceptual errors.
📝 Abstract
Reinforcement learning (RL) has shown great effectiveness for fine-tuning large language models (LLMs) on tasks that are challenging yet easily verifiable, such as math reasoning or code generation. However, extending this success to visual perception in vision-language models (VLMs) has been impeded by the scarcity of vision-centric tasks that are simultaneously challenging and unambiguously verifiable. To this end, we introduce ViCrit (Visual Caption Hallucination Critic), an RL proxy task that trains VLMs to localize a subtle, synthetic visual hallucination injected into paragraphs of human-written image captions. Starting from a 200-word caption, we inject a single, subtle visual description error, altering a few words describing objects, attributes, counts, or spatial relations, and task the model with pinpointing the corrupted span given the image and the modified caption. This formulation preserves the full perceptual difficulty while providing a binary, exact-match reward that is easy to compute and unambiguous. Models trained with the ViCrit task exhibit substantial gains across a variety of vision-language benchmarks. Crucially, the improvements transfer beyond natural-image training data to abstract image reasoning and visual math, showing promise of learning to perceive rather than merely memorizing seen objects. To facilitate evaluation, we further introduce ViCrit-Bench, a category-balanced diagnostic benchmark that systematically probes perception errors across diverse image domains and error types. Together, our results demonstrate that fine-grained hallucination criticism is an effective and generalizable objective for enhancing visual perception in VLMs.
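The binary, exact-match reward described above could be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the light normalization step are assumptions; the paper specifies only that the reward is 1 when the model pinpoints the corrupted span exactly and 0 otherwise.

```python
def exact_match_reward(predicted_span: str, gold_span: str) -> int:
    """Binary reward sketch: 1 if the predicted span exactly matches
    the injected (corrupted) span, else 0. Whitespace/case
    normalization here is an assumption for robustness."""
    def normalize(s: str) -> str:
        return " ".join(s.lower().split())
    return int(normalize(predicted_span) == normalize(gold_span))

# Hypothetical example: a caption corrupted by swapping "red" -> "blue"
gold = "blue umbrella"
print(exact_match_reward("blue umbrella", gold))  # 1: exact match
print(exact_match_reward("red umbrella", gold))   # 0: wrong span
```

Because the reward is a simple string comparison against the known injected span, it is trivially cheap to compute and leaves no ambiguity for the RL objective.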