SynthRL: Scaling Visual Reasoning with Verifiable Data Synthesis

📅 2025-06-02
🤖 AI Summary
Vision-language models (VLMs) struggle to simultaneously achieve deep reasoning, strong generalization, and computational efficiency in visual mathematical reasoning. Method: This paper proposes SynthRL, a scalable pipeline for verifiable synthetic data generation used in reinforcement learning with verifiable reward (RLVR) training. SynthRL comprises a three-stage guarantee mechanism: distribution-aware seed selection, answer-preserving challenging augmentation, and guaranteed verification of both correctness and increased difficulty. Applied to the MMK12 dataset, SynthRL automatically generates over 3.3K additional high-quality, challenging questions from roughly 8K seed samples. Results: Fine-tuning VLMs on SynthRL-synthesized data yields consistent gains over models trained on seed data alone across five out-of-domain visual mathematical reasoning benchmarks, with the largest improvements on the most difficult evaluation samples, indicating a strong coupling between synthetic data quality and complex reasoning capability.

📝 Abstract
Vision-language models (VLMs) trained via reinforcement learning with verifiable reward (RLVR) have shown notable progress in scaling test-time compute effectively. In this work, we investigate how synthesized RL data can further improve RLVR. To this end, we propose SynthRL, a scalable and guaranteed pipeline for automatic data scaling in reasoning-oriented RL training. SynthRL comprises three key stages: (1) selecting seed questions with appropriate distribution, (2) augmenting them into more challenging variants while preserving the original answers, and (3) a guaranteed verification stage that ensures near-perfect correctness and difficulty enhancement. Our empirical experiments demonstrate SynthRL's scalability and effectiveness. When applied to the MMK12 dataset, SynthRL synthesizes over 3.3K additional verifiable, challenging questions from approximately 8K seed samples. Models trained with our synthesized data achieve consistent gains across five out-of-domain visual math reasoning benchmarks, with a significant improvement over baseline models trained on seed data alone. Notably, detailed analysis reveals that the gains are more pronounced on the most challenging evaluation samples, highlighting SynthRL's effectiveness in eliciting deeper and more complex reasoning patterns.
Problem

Research questions and friction points this paper is trying to address.

Scaling visual reasoning with verifiable synthetic data
Improving RLVR training via automated data augmentation
Enhancing model performance on complex reasoning benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses verifiable reward reinforcement learning
Automates scalable data synthesis pipeline
Enhances question difficulty while preserving answers
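The three stages above can be sketched as a toy pipeline. This is a minimal, hypothetical illustration, not the paper's implementation: the function names, the difficulty scores, and the prompt-rewriting augmentation are all stand-in assumptions for the real (model-based) seed selection, augmentation, and verification steps.

```python
# Toy sketch of a SynthRL-style three-stage pipeline.
# All names and the difficulty heuristic are illustrative assumptions,
# not the paper's actual method.

def select_seeds(pool, lo=0.3, hi=0.7):
    """Stage 1: keep seeds whose (toy) difficulty score lies in a target
    band, approximating distribution-aware seed selection."""
    return [q for q in pool if lo <= q["difficulty"] <= hi]

def augment(question):
    """Stage 2: produce a harder variant that keeps the original answer.
    Here we simply prepend an extra reasoning constraint to the prompt."""
    harder = dict(question)
    harder["prompt"] = "Without computing intermediate totals: " + question["prompt"]
    harder["difficulty"] = question["difficulty"] + 0.2  # toy difficulty bump
    return harder

def verify(seed, variant):
    """Stage 3: accept the variant only if the ground-truth answer is
    preserved and the estimated difficulty strictly increased."""
    return (variant["answer"] == seed["answer"]
            and variant["difficulty"] > seed["difficulty"])

def synthesize(pool):
    """Run all three stages over a seed pool, keeping verified variants."""
    out = []
    for seed in select_seeds(pool):
        variant = augment(seed)
        if verify(seed, variant):
            out.append(variant)
    return out

pool = [
    {"prompt": "What is 2 + 3?", "answer": "5", "difficulty": 0.5},
    {"prompt": "Integrate x^2 over [0, 1].", "answer": "1/3", "difficulty": 0.9},
]
synthetic = synthesize(pool)
print(len(synthetic))  # 1 — the out-of-band seed is filtered at stage 1
```

The key design point mirrored here is that verification gates every synthesized question on two conditions at once: answer preservation and a strict difficulty increase. Variants failing either check are discarded rather than repaired.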
Zijian Wu
National University of Singapore
Jinjie Ni
National University of Singapore
Foundation Models · Large Language Models · Artificial Intelligence
Xiangyan Liu
National University of Singapore
AI · Large Language Models
Zichen Liu
National University of Singapore
Hang Yan
The Chinese University of Hong Kong
Michael Shieh
National University of Singapore