Visual-ERM: Reward Modeling for Visual Equivalence

📅 2026-03-13
🤖 AI Summary
This work addresses the problem that existing reward signals in vision-to-code tasks fail to capture fine-grained visual discrepancies and are susceptible to reward hacking. To overcome this, the authors propose Visual-ERM, a multimodal generative reward model that provides fine-grained, interpretable, and task-agnostic visual-equivalence feedback at the rendered-image level. They also introduce VC-RewardBench, a benchmark for judging fine-grained image-to-image discrepancies on structured visual data, where conventional text-based rules and coarse-grained embedding similarity fall short. Used as the reward in reinforcement learning, Visual-ERM improves performance by 8.4, 2.7, and 4.1 points on chart, table, and SVG parsing tasks, respectively, and further strengthens test-time scaling through a reflection-and-revision mechanism. Notably, the 8B Visual-ERM outperforms the much larger Qwen3-VL-235B-Instruct on VC-RewardBench and approaches leading closed-source models.
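To make the reward interface concrete, here is a minimal sketch of a visual-equivalence reward computed in rendered-image space. This is not the paper's implementation: the actual Visual-ERM is a multimodal generative reward model that judges rendered outputs with an LVLM; below, per-pixel agreement over two rendered grids stands in for it, illustrating the "graded score plus interpretable critique" shape of the feedback. All names are hypothetical.

```python
def visual_equivalence_reward(pred: list[list[int]], ref: list[list[int]]):
    """Compare two rendered images (here, 2D pixel grids of equal shape) and
    return (score, critique).

    Placeholder for Visual-ERM: the real model emits a score plus a
    natural-language rationale; here, per-pixel agreement plays the role of
    the score and a list of mismatched cells plays the role of the critique.
    """
    assert len(pred) == len(ref), "rendered images must have the same shape"
    total = matched = 0
    critique = []  # interpretable, fine-grained feedback on discrepancies
    for i, (prow, rrow) in enumerate(zip(pred, ref)):
        for j, (p, r) in enumerate(zip(prow, rrow)):
            total += 1
            if p == r:
                matched += 1
            else:
                critique.append(f"pixel ({i},{j}): expected {r}, got {p}")
    return matched / total, critique
```

Because the score is computed over the rendered output rather than the code text, superficially different programs that render identically receive the same reward, which is the property that text-rule rewards lack.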

📝 Abstract
Vision-to-code tasks require models to reconstruct structured visual inputs, such as charts, tables, and SVGs, into executable or structured representations with high visual fidelity. While recent Large Vision Language Models (LVLMs) achieve strong results via supervised fine-tuning, reinforcement learning remains challenging due to misaligned reward signals. Existing rewards either rely on textual rules or coarse visual embedding similarity, both of which fail to capture fine-grained visual discrepancies and are vulnerable to reward hacking. We propose Visual Equivalence Reward Model (Visual-ERM), a multimodal generative reward model that provides fine-grained, interpretable, and task-agnostic feedback to evaluate vision-to-code quality directly in the rendered visual space. Integrated into RL, Visual-ERM improves Qwen3-VL-8B-Instruct by +8.4 on chart-to-code and yields consistent gains on table and SVG parsing (+2.7, +4.1 on average), and further strengthens test-time scaling via reflection and revision. We also introduce VisualCritic-RewardBench (VC-RewardBench), a benchmark for judging fine-grained image-to-image discrepancies on structured visual data, where Visual-ERM at 8B decisively outperforms Qwen3-VL-235B-Instruct and approaches leading closed-source models. Our results suggest that fine-grained visual reward supervision is both necessary and sufficient for vision-to-code RL, regardless of task specificity.
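The abstract's test-time "reflection and revision" can be sketched as a simple generate-score-revise loop: a candidate is produced, the reward model scores it and returns a critique, and the critique is fed back to the generator for another attempt. The function below is an illustrative sketch under that assumption, not the paper's API; `generate` and `reward` are hypothetical callables supplied by the caller.

```python
def reflect_and_revise(generate, reward, max_rounds: int = 3,
                       threshold: float = 0.95):
    """Test-time scaling loop: generate a candidate, score it with a
    (score, critique) reward function, and feed the critique back as
    revision feedback until the score clears the threshold or the
    round budget is exhausted. Returns the best (score, candidate) seen."""
    feedback = None  # no critique on the first attempt
    best = None
    for _ in range(max_rounds):
        candidate = generate(feedback)
        score, critique = reward(candidate)
        if best is None or score > best[0]:
            best = (score, candidate)
        if score >= threshold:
            break  # good enough: stop spending test-time compute
        feedback = critique  # reflection: revise against the critique
    return best
```

Keeping the best candidate seen (rather than the last) guards against a revision that regresses, a common safeguard in revision-style test-time scaling.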
Problem

Research questions and friction points this paper is trying to address.

vision-to-code
reward modeling
visual equivalence
reinforcement learning
fine-grained visual discrepancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual-ERM
reward modeling
vision-to-code
multimodal reinforcement learning
visual equivalence