DaMo: Data Mixing Optimizer in Fine-tuning Multimodal LLMs for Mobile Phone Agents

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods struggle to identify the optimal multimodal training data mixture for multi-task supervised fine-tuning (SFT), limiting the multi-task coordination capability of multimodal large language models (MLLMs) in mobile phone agents (MPAs). To address this, we propose DaMo, a Data Mixing Optimizer: the first learnable data-ratio prediction network that enables end-to-end modeling and extrapolation-based optimization of multi-task SFT data mixing strategies. To support rigorous evaluation, we introduce PhoneAgentBench—a high-quality, domain-specific benchmark comprising 1,235 real-world mobile interaction question-answer pairs. Experiments demonstrate that DaMo achieves a +3.38% improvement on PhoneAgentBench, an average +2.57% cross-benchmark gain, and a +12.47% boost on BFCL-v3—while maintaining compatibility across diverse MLLM architectures and exhibiting strong scalability.

📝 Abstract
Mobile Phone Agents (MPAs) have emerged as a promising research direction due to their broad applicability across diverse scenarios. While Multimodal Large Language Models (MLLMs) serve as the foundation for MPAs, their effectiveness in handling multiple mobile phone tasks simultaneously remains limited. Although multitask supervised fine-tuning (SFT) is widely adopted for multitask learning, existing approaches struggle to determine optimal training data compositions for peak performance. To address this challenge, we propose DaMo (Data Mixture Optimizer), a novel solution employing a trainable network that predicts optimal data mixtures by forecasting downstream task performance for any given dataset ratio. To support comprehensive evaluation, we introduce PhoneAgentBench, the first specialized benchmark for evaluating MLLMs on multimodal mobile phone tasks, comprising 1,235 QA pairs spanning diverse real-world industrial mobile application scenarios. Demonstrating strong predictive capability (R^2=0.81) in small-scale pilot experiments, DaMo efficiently extrapolates optimal data mixing configurations. Our results show DaMo achieves a 3.38% performance improvement on PhoneAgentBench compared to alternative methods. Furthermore, extensive experiments across established benchmarks including BFCL-v3, MME-Reasoning, MME-Perception, and OCRBench reveal DaMo's superior generalization, outperforming other approaches by 2.57% in average score. When used solely for MLLM optimization on the BFCL-v3 task, DaMo improves the metric by 12.47% over other methods. Notably, DaMo maintains robust scalability, preserving its effectiveness when applied to other model architectures. The code and dataset are available at https://github.com/OPPO-Mente-Lab/DaMo.git
Problem

Research questions and friction points this paper is trying to address.

Optimizing data mixtures for multitask fine-tuning of multimodal LLMs
Predicting optimal training data compositions for mobile phone agents
Improving MLLM performance across diverse mobile application scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

DaMo predicts optimal data mixtures for fine-tuning
Introduces PhoneAgentBench for multimodal mobile task evaluation
Achieves performance gains across multiple established benchmarks
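The core loop the abstract describes is: run small-scale pilot SFT experiments at a few mixture ratios, fit a trainable predictor mapping ratio to downstream score, then extrapolate to the best-predicted mixture. A minimal sketch of that loop, with a synthetic score function and a quadratic least-squares predictor standing in for DaMo's actual network (the dataset names, feature map, and score function here are all illustrative assumptions, not the paper's):

```python
import random

random.seed(0)

# Hypothetical task datasets (illustrative names, not from the paper).
DATASETS = ["ui_grounding", "function_call", "ocr"]

def true_score(r):
    # Synthetic stand-in for an expensive SFT run + benchmark evaluation;
    # in DaMo this signal comes from real small-scale pilot trainings.
    a, b, c = r
    return 0.5 * a + 0.3 * b + 0.2 * c + 0.4 * a * b

def features(r):
    # Quadratic features of the mixture ratio: linear terms plus pairwise
    # interactions (a hand-rolled simplification of a learned network).
    a, b, c = r
    return [1.0, a, b, c, a * b, a * c, b * c]

def solve(A, y):
    # Gaussian elimination with partial pivoting for a small linear system.
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(M[k][col]))
        M[col], M[piv] = M[piv], M[col]
        for k in range(col + 1, n):
            f = M[k][col] / M[col][col]
            for j in range(col, n + 1):
                M[k][j] -= f * M[col][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit(ratios, scores, lam=1e-6):
    # Ridge-regularized least squares via the normal equations.
    X = [features(r) for r in ratios]
    d = len(X[0])
    A = [[sum(x[i] * x[j] for x in X) + (lam if i == j else 0.0)
          for j in range(d)] for i in range(d)]
    b = [sum(x[i] * s for x, s in zip(X, scores)) for i in range(d)]
    return solve(A, b)

def predict(w, r):
    return sum(wi * fi for wi, fi in zip(w, features(r)))

def random_ratio():
    u = [random.random() for _ in range(3)]
    s = sum(u)
    return tuple(v / s for v in u)

# 1) Pilot experiments: a handful of random mixtures and their observed scores.
pilots = [random_ratio() for _ in range(40)]
w = fit(pilots, [true_score(r) for r in pilots])

# 2) Extrapolation: search the ratio simplex on a coarse grid for the
#    mixture with the highest *predicted* downstream score.
grid = [(i * 0.05, j * 0.05, max(0.0, 1.0 - i * 0.05 - j * 0.05))
        for i in range(21) for j in range(21 - i)]
best = max(grid, key=lambda r: predict(w, r))
print("best mixture:", [round(v, 2) for v in best])  # → best mixture: [0.75, 0.25, 0.0]
```

In the real pipeline, `true_score` would be an actual fine-tuning run plus benchmark evaluation and the predictor a neural network trained on those pilot results; the sketch only illustrates the fit-then-extrapolate structure the paper attributes to DaMo.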
Authors
Kai Shi (Microsoft)
Jun Yang (OPPO AI Center)
Ni Yang (OPPO AI Center)
Binqiang Pan (OPPO AI Center)
Qingsong Xie (OPPO AI Center)
Chao Zhang (OPPO AI Center)
Zhenyu Yang (OPPO AI Center)
Tianhuang Su (OPPO AI Center)
Haonan Lu (OPPO AI Center)