AI Summary
Existing Reinforcement Learning with Verifiable Rewards (RLVR) methods for vision-language models (VLMs) evaluate only the final textual output, neglecting verification during the visual perception stage, which leads to visual hallucinations and reward hacking. To address this, we propose PEARL, a dual-branch collaborative framework that introduces, for the first time, a **perceptual evidence anchoring mechanism**: it constructs verifiable perceptual checkpoints via a curated checklist of perception-oriented sub-questions and employs perceptual rewards as fidelity gates to jointly optimize perception and reasoning. Built on RL frameworks such as GRPO and DAPO, PEARL uses auxiliary rollouts to generate perceptual rewards, enabling multi-step perceptual validation. On benchmarks including MathVerse, PEARL achieves a 9.7% absolute improvement over the standard baseline and a 6.6% gain over GRPO, significantly enhancing the reliability and accuracy of multimodal reasoning.
Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has significantly advanced the reasoning capabilities of Large Language Models (LLMs) and is now being applied to Vision-Language Models (VLMs). However, vanilla RLVR for VLMs verifies only the final textual output, critically neglecting the foundational step of visual perception. This oversight leads to visual hallucinations and reward hacking, as reasoning built upon flawed perception is inherently unreliable. To address this, we propose PEARL (Perceptual-Evidence Anchored Reinforced Learning), a dual-branch, perception-reasoning synergistic framework that strengthens multimodal reasoning by explicitly anchoring it to verified visual evidence. For each reasoning-oriented QA instance, PEARL first derives a perception checklist -- a set of perception-oriented sub-questions with verifiable answers that probe the model's understanding of key visual evidence. During training, auxiliary rollouts on this checklist yield a perceptual reward that both directly reinforces the model's perception ability and acts as a fidelity gate for reasoning. If the model passes the perception check, its policy update is biased towards evidence-anchored reasoning. Otherwise, the update is halted to prevent the policy from reinforcing reasoning built on flawed premises. PEARL can be seamlessly integrated with popular RL methods like GRPO and DAPO. Comprehensive experiments show PEARL achieves substantial gains on multimodal reasoning benchmarks, e.g., a +9.7% improvement over the baseline and +6.6% over GRPO on MathVerse.
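The fidelity-gate logic described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name `pearl_rewards`, the `gate_threshold` parameter, and the binary reasoning reward are all assumptions for exposition; in practice the perceptual reward would come from auxiliary rollouts scored against the checklist's verifiable answers.

```python
def pearl_rewards(answer_correct, checklist_results, gate_threshold=0.5):
    """Hypothetical sketch of PEARL's perceptual fidelity gate.

    checklist_results: per-sub-question pass/fail booleans obtained from
    auxiliary rollouts on the perception checklist.
    Returns (perceptual_reward, reasoning_reward).
    """
    # Perceptual reward: fraction of checklist sub-questions answered
    # correctly; this directly reinforces perception ability.
    perceptual_reward = sum(checklist_results) / len(checklist_results)

    if perceptual_reward < gate_threshold:
        # Gate closed: perception check failed, so the reasoning reward
        # is withheld to avoid reinforcing reasoning on flawed premises.
        return perceptual_reward, 0.0

    # Gate open: the verifiable final-answer reward contributes, biasing
    # the policy update toward evidence-anchored reasoning.
    reasoning_reward = 1.0 if answer_correct else 0.0
    return perceptual_reward, reasoning_reward
```

These two reward terms could then be combined into the advantage computation of any group-based RL method such as GRPO or DAPO.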