ACPO: Adaptive Curriculum Policy Optimization for Aligning Vision-Language Models in Complex Reasoning

📅 2025-10-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the inflexible policy optimization and low sample efficiency of vision-language models in complex reasoning tasks, this paper proposes a reinforcement learning alignment framework. First, a dynamic curriculum mechanism is designed to enable progressive training scheduling—from exploration to exploitation. Second, an advantage-aware adaptive clipping strategy is introduced, performing fine-grained and robust policy updates based on normalized advantage estimates. Third, a dynamic sample reuse mechanism is integrated to enhance data utilization efficiency. Embedded within the Proximal Policy Optimization (PPO) framework, the method significantly improves multimodal reasoning alignment. It achieves state-of-the-art performance on MathVista, LogicVista, and MMMU-Pro benchmarks, with a 23% faster convergence rate and a 37% reduction in training variance—demonstrating superior stability and generalization.

Technology Category

Application Category

📝 Abstract
Aligning large-scale vision-language models (VLMs) for complex reasoning via reinforcement learning is often hampered by the limitations of existing policy optimization algorithms, such as static training schedules and the rigid, uniform clipping mechanism in Proximal Policy Optimization (PPO). In this work, we introduce Adaptive Curriculum Policy Optimization (ACPO), a novel framework that addresses these challenges through a dual-component adaptive learning strategy. First, ACPO employs a dynamic curriculum that orchestrates a principled transition from a stable, near on-policy exploration phase to an efficient, off-policy exploitation phase by progressively increasing sample reuse. Second, we propose an Advantage-Aware Adaptive Clipping (AAAC) mechanism that replaces the fixed clipping hyperparameter with dynamic, sample-wise bounds modulated by the normalized advantage of each token. This allows for more granular and robust policy updates, enabling larger gradients for high-potential samples while safeguarding against destructive ones. We conduct extensive experiments on a suite of challenging multimodal reasoning benchmarks, including MathVista, LogicVista, and MMMU-Pro. Results demonstrate that ACPO consistently outperforms strong baselines such as DAPO and PAPO, achieving state-of-the-art performance, accelerated convergence, and superior training stability.
Problem

Research questions and friction points this paper is trying to address.

Optimizing VLM alignment via adaptive curriculum learning strategies
Replacing rigid PPO clipping with dynamic advantage-aware mechanisms
Enhancing multimodal reasoning performance on complex benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic curriculum transitions from exploration to exploitation
Adaptive clipping mechanism adjusts bounds per sample advantage
Framework enhances policy updates for complex multimodal reasoning
Y
Yunhao Wang
Xiaomi Inc., Beijing, China
Z
Ziting Li
Xiaomi Inc., Beijing, China
S
Shuai Chen
Xiaomi Inc., Beijing, China
T
Tao Liu
Xiaomi Inc., Beijing, China
C
Chao Song
Xiaomi Inc., Beijing, China
Junjie Jiang
Junjie Jiang
Northeastern University(Shenyang, China)
Robot NavigationDeep Reinforcement LearningSpiking Neural NetworksEvent Cameras
J
Jian Zhu
Xiaomi Inc., Beijing, China
P
Peng Gao
Xiaomi Inc., Beijing, China
Bin Qin
Bin Qin
Institute of Software Chinese Academy of Sciences
Machine LearningCausal Inference