🤖 AI Summary
Addressing key challenges in multimodal large language model (MLLM) reward modeling—including high annotation costs, coarse-grained single-step rewards, and the absence of dedicated evaluation benchmarks—this paper proposes SVIP, a novel framework for step-wise, vision-program-guided reward modeling. SVIP introduces the first chain-of-thought (CoT) reward modeling paradigm grounded in executable visual programs: it automatically generates task-specific vision code and parses its execution trace to construct fine-grained, multi-dimensional step-level reward signals. To capture complex reward dependencies across modalities, reasoning steps, and reward dimensions, SVIP designs TriAtt-CoT, a triple-attention mechanism integrating cross-modal, inter-step, and intra-dimension modeling. Furthermore, SVIP establishes the first dedicated benchmark for evaluating multimodal CoT reward models. Experiments demonstrate that SVIP significantly improves MLLM training stability and inference consistency, reduces hallucination rates, and achieves state-of-the-art performance across multiple benchmarks.
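The summary describes TriAtt-CoT only at a high level. As an illustration of how three stacked attention passes (cross-modal, inter-step, intra-dimension) could produce step-wise multi-dimensional rewards, here is a minimal NumPy sketch; the function names, tensor shapes, residual connections, and the random linear scoring head are all assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

def attention(q, k, v):
    """Single-head scaled dot-product attention (no masking)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def triatt_cot(step_emb, image_emb):
    """Sketch of a triple-attention pass (illustrative, not SVIP's code).

    step_emb:  (S, D, H) -- S CoT steps x D reward dimensions x hidden size H
    image_emb: (T, H)    -- T visual tokens
    Returns:   (S, D)    -- a scalar reward per step and per reward dimension
    """
    S, D, H = step_emb.shape
    # 1) cross-modal: every step/dimension embedding attends to image tokens
    x = step_emb.reshape(S * D, H)
    x = x + attention(x, image_emb, image_emb)
    x = x.reshape(S, D, H)
    # 2) inter-step: within each reward dimension, attend across CoT steps
    for d in range(D):
        x[:, d, :] = x[:, d, :] + attention(x[:, d, :], x[:, d, :], x[:, d, :])
    # 3) intra-dimension: within each step, attend across reward dimensions
    for s in range(S):
        x[s] = x[s] + attention(x[s], x[s], x[s])
    # project to scalar rewards with a (randomly initialized) linear head
    head = np.random.default_rng(0).normal(size=(H,)) / np.sqrt(H)
    return x @ head
```

Calling `triatt_cot` on embeddings of shape `(4, 3, 16)` with 10 visual tokens of width 16 yields a `(4, 3)` matrix: one reward per reasoning step per reward dimension, which is the fine-grained signal shape the summary refers to.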
📝 Abstract
Recent work has made remarkable progress in using reward signals to train Large Language Models (LLMs). However, transferring reward signals to the multimodal domain raises significant challenges, including labor-intensive annotations, over-reliance on one-step rewards, and inadequate evaluation. To address these issues, we propose SVIP, a novel approach that automatically trains a step-level, multi-dimensional Chain-of-Thought (CoT) reward model. SVIP generates code for solving visual tasks and transforms the analysis of each code block into an evaluation of the corresponding CoT step, yielding training samples. We then train the SVIP-Reward model using a multi-head attention mechanism called TriAtt-CoT. The advantages of SVIP-Reward are evident throughout the entire MLLM pipeline. We also introduce a benchmark for CoT reward model training and testing. Experimental results demonstrate that SVIP-Reward improves MLLM performance during both training and inference-time scaling, yielding better results on benchmarks while reducing hallucinations and enhancing reasoning ability.
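The abstract's core idea, turning the execution trace of generated code blocks into step-level reward labels, can be sketched as follows. This is a hypothetical labeling scheme for illustration only: the per-block `exec` loop, the `correctness` score of 1.0/0.0, and the early stop on failure are assumptions, not SVIP's actual pipeline.

```python
import math

def trace_to_step_labels(code_blocks):
    """Execute generated code blocks one by one; each block's outcome
    becomes a reward label for the matching CoT step.

    Hypothetical scheme: correctness 1.0 if the block runs cleanly,
    0.0 if it raises, stopping early since later steps may depend on
    state produced by earlier ones.
    """
    env = {"math": math}  # shared namespace threads state across steps
    labels = []
    for block in code_blocks:
        try:
            exec(block, env)
            labels.append({"correctness": 1.0, "executed": True})
        except Exception as e:
            labels.append({"correctness": 0.0, "executed": False,
                           "error": type(e).__name__})
            break
    return labels

# Toy "visual task" program: each block stands in for one CoT step.
blocks = [
    "w, h = 640, 480",        # step 1: read the image size
    "area = w * h",           # step 2: compute the area
    "assert area == 307200",  # step 3: verify the answer
]
```

Here `trace_to_step_labels(blocks)` returns one label dict per step, giving fine-grained supervision without human annotation, which is the automation the abstract claims over labor-intensive labeling.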