Syn-GRPO: Self-Evolving Data Synthesis for MLLM Perception Reasoning

πŸ“… 2025-11-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing reinforcement learning (RL) methods for enhancing the visual perception of multimodal large language models (MLLMs) suffer from insufficient training-data diversity, leading to limited exploration and homogeneous responses. To address this, we propose Syn-GRPO, a closed-loop self-evolving framework that decouples data synthesis from training via an asynchronous scheme, jointly leveraging an image generation model and the GRPO RL framework to generate high-diversity, high-quality training samples online. Crucially, we introduce a novel diversity reward that explicitly guides the model to self-evolve and iteratively improve data quality. Evaluated on three visual perception tasks, Syn-GRPO achieves significant gains in response diversity (+23.6%) and task performance (+5.8% average improvement). It is the first method to realize co-evolution of data quality and model capability, establishing a new paradigm for long-horizon self-evolving RL in MLLMs.
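The summary does not spell out how the diversity reward is computed; as a minimal sketch, assuming diversity is scored at the GRPO group level, one plausible instantiation is the mean pairwise dissimilarity among a group's rollout responses (token-level Jaccard dissimilarity here; `diversity_reward` and `jaccard_dissimilarity` are illustrative names, not the paper's API):

```python
from itertools import combinations

def jaccard_dissimilarity(a: str, b: str) -> float:
    """1 - Jaccard overlap between the two responses' token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)

def diversity_reward(responses: list[str]) -> float:
    """Mean pairwise dissimilarity over a GRPO rollout group (higher = more diverse)."""
    if len(responses) < 2:
        return 0.0
    pairs = list(combinations(responses, 2))
    return sum(jaccard_dissimilarity(a, b) for a, b in pairs) / len(pairs)

# A homogeneous group scores near 0; a varied group scores close to 1.
print(diversity_reward(["a red cube"] * 3))                            # 0.0
print(diversity_reward(["a red cube", "two green spheres", "a dog"]))  # ~0.92
```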

πŸ“ Abstract
RL (reinforcement learning) methods (e.g., GRPO) for improving MLLM (multimodal LLM) perception ability have attracted wide research interest owing to their remarkable generalization ability. Nevertheless, existing reinforcement learning methods still face the problem of low data quality: data samples cannot elicit diverse responses from MLLMs, which restricts the exploration scope of MLLM reinforcement learning. Some methods attempt to mitigate this problem by imposing constraints on entropy, but none address it at its root. To tackle this problem, this work proposes Syn-GRPO (Synthesis-GRPO), which employs an online data generator to synthesize high-quality training data with diverse responses during GRPO training. Specifically, Syn-GRPO consists of two components: (1) a data server and (2) a GRPO workflow. The data server synthesizes new samples from existing ones using an image generation model, featuring a decoupled and asynchronous scheme that achieves high generation efficiency. The GRPO workflow provides the data server with new image descriptions, and it leverages a diversity reward to supervise the MLLM in predicting image descriptions that yield samples with diverse responses. Experiment results across three visual perception tasks demonstrate that Syn-GRPO improves data quality by a large margin, achieving significantly superior performance to existing MLLM perception methods, and Syn-GRPO shows promising potential for scaling long-term self-evolving RL. Our code is available at https://github.com/hqhQAQ/Syn-GRPO.
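As a rough sketch of the decoupled, asynchronous scheme the abstract describes, the GRPO side enqueues image descriptions while a background worker synthesizes samples, so generation overlaps with training rather than blocking it. Everything here (`generate_image`, the queue names, the sample dict format) is an assumption for illustration, not the repository's actual interface:

```python
import queue
import threading

description_queue = queue.Queue()  # GRPO workflow -> data server
sample_queue = queue.Queue()       # data server -> GRPO training

def generate_image(description: str) -> bytes:
    # Placeholder standing in for a real image generation model call.
    return f"<image for: {description}>".encode()

def data_server() -> None:
    """Consume descriptions, synthesize samples, publish them back to training."""
    while True:
        desc = description_queue.get()
        if desc is None:  # shutdown sentinel
            break
        sample_queue.put({"description": desc, "image": generate_image(desc)})

worker = threading.Thread(target=data_server, daemon=True)
worker.start()

# GRPO side: submit a predicted description, later pull the finished sample.
description_queue.put("a cluttered desk with three coffee mugs")
print(sample_queue.get())
```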
Problem

Research questions and friction points this paper is trying to address.

Low-quality data limits the exploration scope of MLLM reinforcement learning
Existing methods fail to elicit diverse MLLM responses effectively
Current RL approaches cannot efficiently synthesize high-quality training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online data generator synthesizes high-quality training data
Decoupled asynchronous data server for efficient image generation
Diversity reward supervises the MLLM to synthesize samples that elicit diverse responses (see the loop sketch below)
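Tying the pieces together, a single training step might compose the verifiable task reward with the diversity reward and feed the MLLM's predicted descriptions back to the data server. This toy loop reuses `diversity_reward` and `description_queue` from the sketches above; the stub policy, reward weight, and helper names are all hypothetical, not the paper's method:

```python
import random

LAMBDA = 0.5  # diversity-reward weight; an illustrative value, not from the paper

def toy_mllm(sample: dict) -> str:
    """Stub policy: randomly answers a counting-style perception question."""
    return random.choice(["3", "4", "5"])

def task_reward(sample: dict, response: str) -> float:
    """Stub verifiable reward, e.g., exact match on an object count."""
    return 1.0 if response == sample["answer"] else 0.0

def predict_descriptions(sample: dict) -> list[str]:
    """Stub: descriptions the MLLM proposes for the next synthesis round."""
    return [f"harder variant of: {sample['description']}"]

def grpo_step(sample: dict) -> list[float]:
    responses = [toy_mllm(sample) for _ in range(8)]   # GRPO rollout group
    bonus = LAMBDA * diversity_reward(responses)       # sketch after the summary
    rewards = [task_reward(sample, r) + bonus for r in responses]
    # ... group-normalized advantages and the policy update would go here ...
    for desc in predict_descriptions(sample):
        description_queue.put(desc)                    # closes the self-evolving loop
    return rewards

sample = {"description": "three coffee mugs on a desk", "answer": "3"}
print(grpo_step(sample))
```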
πŸ‘₯ Authors
Qihan Huang
PhD Student, Zhejiang University
Haofei Zhang
Zhejiang University, Manycore Tech Inc.
Rong Wei
Manycore Tech Inc.
Yi Wang
Zhejiang University, Manycore Tech Inc.
Rui Tang
Manycore Tech Inc.
Mingli Song
Zhejiang University, Manycore Tech Inc.
Jie Song
Zhejiang University, Manycore Tech Inc.