A Rolling Stone Gathers No Moss: Adaptive Policy Optimization for Stable Self-Evaluation in Large Multimodal Models

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large multimodal models (LMMs) lack intrinsic self-assessment capabilities in multi-turn dialogues, and existing reinforcement learning approaches suffer from reward hacking and policy collapse due to static reward functions. To address these issues, we propose AdaPO, a framework featuring adaptive reward modeling and reward-aware dynamic KL regularization, which adjusts training objectives in real time and provides fine-grained control during training. AdaPO combines online reinforcement learning with analysis of the performance distribution of multi-turn trajectories, and operates fully automatically without human intervention. Evaluated across eight diverse benchmarks and multiple LMM architectures, AdaPO consistently improves reasoning accuracy and self-assessment consistency while enhancing training stability and cross-task generalization, advancing robust and reliable multimodal dialogue systems.

📝 Abstract
Self-evaluation, a model's ability to assess the correctness of its own output, is crucial for Large Multimodal Models (LMMs) to achieve self-improvement in multi-turn conversations, yet it is largely absent from foundation models. Recent work has employed reinforcement learning (RL) to enhance self-evaluation; however, its fixed reward mechanism suffers from reward hacking when optimizing multiple training objectives, leading to model collapse. In this paper, we propose AdaPO, an online reinforcement learning framework capable of adaptively adjusting its training objective in real time according to each task's current training state. Specifically, to mitigate reward hacking, AdaPO introduces an Adaptive Reward Model (ARM) and a Reward-Aware Dynamic KL Regularization mechanism. ARM assesses each task's training state from the performance distribution of model-generated multi-turn trajectories. Reward-Aware Dynamic KL replaces a fixed penalty with dynamic coefficients that are modulated by the reward gap between different multi-turn situations. Notably, our method automatically and smoothly adjusts its learning focus based on each sub-task's training progress, without manual intervention. Extensive experiments across 8 benchmarks and various models show that our method significantly enhances both direct reasoning and self-evaluation capability. We will release our code to contribute to the community.
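The abstract does not give the exact formulation of Reward-Aware Dynamic KL, but the core idea, a KL coefficient that varies with the reward gap rather than staying fixed, can be sketched. The function names, the exponential-decay form, and all constants below are illustrative assumptions, not the paper's method:

```python
import math

def dynamic_kl_coef(reward_gap: float,
                    base_coef: float = 0.1,
                    sensitivity: float = 1.0) -> float:
    """Map a reward gap between multi-turn situations to a KL coefficient.

    Assumed behavior: a larger gap means a stronger learning signal, so the
    KL penalty is relaxed (coefficient decays); a near-zero gap keeps the
    policy close to the reference model (coefficient stays near base_coef).
    """
    return base_coef * math.exp(-sensitivity * abs(reward_gap))

def kl_penalized_objective(policy_reward: float,
                           kl_divergence: float,
                           reward_gap: float) -> float:
    """Per-sample objective: reward minus a reward-gap-modulated KL penalty."""
    return policy_reward - dynamic_kl_coef(reward_gap) * kl_divergence
```

A fixed-penalty baseline would simply use `base_coef` everywhere; the modulation here is what lets the regularization strength differ across multi-turn situations within one training run.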
Problem

Research questions and friction points this paper is trying to address.

Enhancing self-evaluation in Large Multimodal Models (LMMs)
Preventing reward hacking in multi-objective reinforcement learning
Adaptively adjusting training objectives for stable model optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Policy Optimization for stable self-evaluation
Adaptive Reward Model to mitigate reward hacking
Dynamic KL Regularization with reward-aware coefficients
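The Adaptive Reward Model is described as assessing a task's training state from the performance distribution of multi-turn trajectories. A minimal sketch of that idea follows; the state labels, thresholds, and signals are hypothetical, since the summary does not specify them:

```python
from statistics import mean

def assess_training_state(first_turn_correct: list,
                          after_correction_correct: list) -> str:
    """Classify a task's training state from trajectory outcomes (0/1 flags).

    Assumed signals: first-turn accuracy (direct reasoning) and the gain
    from self-correction in later turns (self-evaluation quality).
    """
    direct_acc = mean(first_turn_correct)
    self_eval_gain = mean(after_correction_correct) - direct_acc
    if direct_acc < 0.3:
        # Base task is still weak; emphasize direct reasoning reward.
        return "focus_reasoning"
    if self_eval_gain < 0.05:
        # Corrections add little; emphasize the self-evaluation objective.
        return "focus_self_evaluation"
    return "balanced"
```

An adaptive reward model could then reweight its objectives per task based on such a state, shifting focus smoothly as sub-tasks progress, rather than relying on one fixed reward mixture.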