MoDoMoDo: Multi-Domain Data Mixtures for Multimodal LLM Reinforcement Learning

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) suffer from cross-task reward conflicts and poor generalization when fine-tuned via reinforcement learning (RL) on multi-domain data. Method: This work formally defines the multi-domain data mixture optimization problem and proposes an adaptive data mixing strategy that predicts RL fine-tuning performance via hybrid distribution modeling. The approach integrates domain-specific verifiable reward design, online multi-task RL, and data distribution modeling and optimization, unifying them into a cohesive multimodal RLVR training framework. Contribution/Results: On out-of-distribution (OOD) benchmarks, the proposed strategy improves accuracy by an average of 5.24% over uniform data mixing and by 20.74% in total over the pre-finetuning baseline. It substantially strengthens the cross-domain reasoning robustness and generalization of MLLMs, establishing a principled foundation for scalable, domain-agnostic multimodal RL optimization.

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a powerful paradigm for post-training large language models (LLMs), achieving state-of-the-art performance on tasks with structured, verifiable answers. Applying RLVR to Multimodal LLMs (MLLMs) presents significant opportunities but is complicated by the broader, heterogeneous nature of vision-language tasks that demand nuanced visual, logical, and spatial capabilities. As such, training MLLMs using RLVR on multiple datasets could be beneficial but creates challenges with conflicting objectives from interaction among diverse datasets, highlighting the need for optimal dataset mixture strategies to improve generalization and reasoning. We introduce a systematic post-training framework for Multimodal LLM RLVR, featuring a rigorous data mixture problem formulation and benchmark implementation. Specifically, (1) We developed a multimodal RLVR framework for multi-dataset post-training by curating a dataset that contains different verifiable vision-language problems and enabling multi-domain online RL learning with different verifiable rewards; (2) We proposed a data mixture strategy that learns to predict the RL fine-tuning outcome from the data mixture distribution, and consequently optimizes the best mixture. Comprehensive experiments showcase that multi-domain RLVR training, when combined with mixture prediction strategies, can significantly boost MLLM general reasoning capacities. Our best mixture improves the post-trained model's accuracy on out-of-distribution benchmarks by an average of 5.24% compared to the same model post-trained with uniform data mixture, and by a total of 20.74% compared to the pre-finetuning baseline.
Problem

Research questions and friction points this paper is trying to address.

Optimizing dataset mixtures for Multimodal LLM reinforcement learning
Addressing conflicting objectives in multi-dataset MLLM training
Enhancing generalization and reasoning in vision-language tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal RLVR framework for multi-dataset training
Data mixture strategy optimizing RL outcomes
Multi-domain online RL with verifiable rewards
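The "predict, then optimize" mixture strategy above can be illustrated with a minimal sketch. This is not the paper's implementation (its "hybrid distribution modeling" predictor is richer): here we assume a handful of pilot RL runs, fit a simple linear surrogate mapping mixture weights to evaluation score, and search the probability simplex for the mixture the surrogate predicts to score highest. All domain names, pilot mixtures, and scores below are hypothetical placeholders.

```python
import numpy as np

def fit_surrogate(mixtures, scores):
    """Least-squares linear surrogate: predicted score = w . g,
    where g holds one estimated per-domain gain per dataset.
    mixtures: (n, k) rows on the probability simplex; scores: (n,) eval results."""
    g, *_ = np.linalg.lstsq(np.asarray(mixtures, dtype=float),
                            np.asarray(scores, dtype=float), rcond=None)
    return g

def best_mixture(g, n_samples=10_000, seed=0):
    """Search the simplex by Dirichlet sampling for the mixture with the
    highest surrogate-predicted score."""
    rng = np.random.default_rng(seed)
    candidates = rng.dirichlet(np.ones(len(g)), size=n_samples)
    preds = candidates @ g
    i = int(np.argmax(preds))
    return candidates[i], float(preds[i])

# Hypothetical pilot runs over 3 domains (e.g. math, chart QA, spatial reasoning):
pilot_mixtures = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1/3, 1/3, 1/3],
]
pilot_scores = [0.42, 0.51, 0.38, 0.49]  # hypothetical OOD eval accuracies

g = fit_surrogate(pilot_mixtures, pilot_scores)
w_star, predicted = best_mixture(g)
```

With a purely linear surrogate the optimum sits at a simplex vertex, so in practice a richer predictor (capturing cross-domain interaction, as the paper's formulation requires) is what makes non-trivial mixtures optimal; the sketch only shows the outer predict-and-optimize loop.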