🤖 AI Summary
Existing vision-language models struggle with precise credit assignment in complex visual reasoning tasks such as chart understanding, which leads to unreliable multi-step reasoning. To address this, this work proposes SketchVL, a framework that constructs traceable multi-step reasoning trajectories by iteratively drawing intermediate reasoning markers on the image and feeding the annotated image back into the model. SketchVL introduces, for the first time, a fine-grained process reward mechanism (FinePRM) coupled with a novel reinforcement learning algorithm, FinePO, which enables credit assignment at the level of each individual drawing action. This approach substantially improves the model's control over complex reasoning paths, yielding an average performance gain of 7.23% across chart understanding, natural image reasoning, and mathematical reasoning benchmarks.
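To make the draw-and-feed-back loop concrete, the sketch below shows one plausible shape of the iterative annotation cycle. It is a minimal illustration based only on the summary above: `SketchModel`, `Step`, `draw_marker`, and the step budget are hypothetical names and interfaces, not the paper's actual API.

```python
from typing import List, Optional, Protocol, Tuple


class Step(Protocol):
    """One proposed reasoning step (hypothetical interface)."""
    text: str                            # natural-language reasoning for this step
    marker: Tuple[int, int, int, int]    # e.g. a bounding box to draw on the image
    is_final: bool                       # True once the model commits to an answer
    answer: Optional[str]


class SketchModel(Protocol):
    """The vision-language model (hypothetical interface)."""
    def propose_step(self, image, question: str) -> Step: ...


def run_sketch_loop(model: SketchModel, image, question: str,
                    draw_marker, max_steps: int = 8):
    """Iteratively ask the model for a step, render its marker onto the
    image, and feed the annotated image back in for the next step."""
    history: List[str] = []
    for _ in range(max_steps):
        step = model.propose_step(image, question)
        history.append(step.text)
        if step.is_final:
            return step.answer, history
        image = draw_marker(image, step.marker)  # annotate, then loop
    return None, history
```

The design point this sketch highlights is that each drawing action becomes a discrete, traceable unit of the trajectory, which is what later allows a process reward model to score steps individually.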
📝 Abstract
Charts are high-density visual carriers of complex data and a key medium for information extraction and analysis. Because it demands precise and complex visual reasoning, automated chart understanding poses a significant challenge to existing Multimodal Large Language Models (MLLMs). Many MLLMs trained with reinforcement learning (RL) face a credit-assignment problem: their advantage estimation, typically performed at the trajectory level, cannot distinguish correct from incorrect reasoning steps within a single generated response. To address this limitation, we introduce SketchVL, a novel MLLM optimized with FinePO, a new RL algorithm designed for fine-grained credit assignment within each trajectory. SketchVL draws its intermediate reasoning steps as markers on the image and feeds the annotated image back to itself, creating a robust multi-step reasoning process. During training, FinePO leverages a Fine-grained Process Reward Model (FinePRM) to score each drawing action within a trajectory, thereby precisely assigning credit to each step. This mechanism allows FinePO to reward correct tokens more strongly when a trajectory is globally successful and to penalize incorrect tokens more heavily when the trajectory is globally suboptimal, yielding fine-grained reinforcement signals. Experiments show that SketchVL learns to align its step-level behavior with the FinePRM, achieving an average performance gain of 7.23% over its base model across chart, natural image, and mathematical reasoning benchmarks, providing a promising new direction for training powerful reasoning models.
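The abstract's description of FinePO suggests a step-level modulation of a trajectory-level advantage. The following is a minimal sketch under that reading, assuming a GRPO-style group baseline and per-step FinePRM scores in [0, 1]; the function name and the exact weighting scheme are illustrative guesses, not the published algorithm.

```python
import numpy as np


def finepo_advantages(traj_reward, step_scores, group_baseline):
    """Hypothetical sketch of FinePO-style fine-grained credit assignment.

    traj_reward    : scalar outcome reward for the whole trajectory
    step_scores    : per-drawing-action scores in [0, 1] from a process
                     reward model (FinePRM in the paper)
    group_baseline : mean reward over a group of sampled trajectories
                     (a GRPO-style baseline; an assumption here)

    The trajectory-level advantage is reweighted per step so that when
    the trajectory is globally good (positive advantage), higher-scored
    steps are rewarded more strongly, and when it is globally bad
    (negative advantage), lower-scored steps are penalized more heavily.
    """
    adv = traj_reward - group_baseline
    scores = np.asarray(step_scores, dtype=float)
    if adv >= 0:
        weights = scores        # amplify credit for correct steps
    else:
        weights = 1.0 - scores  # amplify blame for incorrect steps
    return adv * weights        # one advantage per drawing action
```

In a policy-gradient update, the tokens generated for each drawing action would then inherit that step's advantage, so correct steps inside a successful trajectory are reinforced more strongly than incorrect ones, matching the behavior the abstract attributes to FinePO.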