MAPS: Preserving Vision-Language Representations via Module-Wise Proximity Scheduling for Better Vision-Language-Action Generalization

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fine-tuning vision-language-action (VLA) models often degrades the generalization priors of the pre-trained vision-language models (VLMs) they build on. Existing strategies, such as full parameter freezing or uniform regularization, struggle to balance stability and adaptability. To address this, we propose MAPS (Module-Wise Proximity Scheduling), a module-level proximity-regularization framework. MAPS introduces a phased relaxation mechanism that dynamically adjusts regularization strength per module (e.g., visual encoder, language/action layers) according to its functional role, without introducing extra parameters or data. Guided by empirical analysis, it preserves pre-trained representations while enabling task-specific adaptation. Evaluated on MiniVLA-VQ, MiniVLA-OFT, and OpenVLA-OFT across benchmarks including SimplerEnv, CALVIN, and LIBERO, MAPS achieves performance gains of up to +30%. Real-world validation on the Franka Emika Panda robotic platform further confirms its practical efficacy in embodied action execution.

📝 Abstract
Vision-Language-Action (VLA) models inherit strong priors from pretrained Vision-Language Models (VLMs), but naive fine-tuning often disrupts these representations and harms generalization. Existing fixes -- freezing modules or applying uniform regularization -- either overconstrain adaptation or ignore the differing roles of VLA components. We present MAPS (Module-Wise Proximity Scheduling), the first robust fine-tuning framework for VLAs. Through systematic analysis, we uncover an empirical order in which proximity constraints should be relaxed to balance stability and flexibility. MAPS linearly schedules this relaxation, enabling visual encoders to stay close to their pretrained priors while action-oriented language layers adapt more freely. MAPS introduces no additional parameters or data, and can be seamlessly integrated into existing VLAs. Across MiniVLA-VQ, MiniVLA-OFT, OpenVLA-OFT, and challenging benchmarks such as SimplerEnv, CALVIN, LIBERO, as well as real-world evaluations on the Franka Emika Panda platform, MAPS consistently boosts both in-distribution and out-of-distribution performance (up to +30%). Our findings highlight empirically guided proximity to pretrained VLMs as a simple yet powerful principle for preserving broad generalization in VLM-to-VLA transfer.
Problem

Research questions and friction points this paper is trying to address.

Preserving vision-language representations during VLA fine-tuning
Balancing stability and flexibility in VLA adaptation
Preventing disruption of pretrained priors in VLA models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Module-wise proximity scheduling for VLA fine-tuning
Linear relaxation of constraints balances stability and flexibility
No extra parameters or data needed for integration
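The mechanism described above can be sketched in a few lines: a per-module L2 proximity penalty that pulls parameters toward their pretrained values, with each module's coefficient linearly relaxed to zero in a chosen order. This is a minimal illustration, not the paper's implementation; the module names (`vision`, `language`), the one-module-at-a-time phasing, and the relaxation order are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn

class ToyVLA(nn.Module):
    """Stand-in for a VLA backbone; module names are illustrative, not the paper's."""
    def __init__(self):
        super().__init__()
        self.vision = nn.Linear(4, 4)    # plays the role of the visual encoder
        self.language = nn.Linear(4, 4)  # plays the role of the language/action layers

def proximity_loss(model, pretrained, coeffs):
    """Module-wise L2 proximity penalty to the pretrained weights.

    `coeffs` maps a top-level module name (e.g. "vision") to its current
    regularization strength; parameters in unlisted modules are unconstrained.
    """
    loss = torch.zeros(())
    for name, param in model.named_parameters():
        group = name.split(".")[0]
        c = coeffs.get(group, 0.0)
        if c > 0:
            loss = loss + c * (param - pretrained[name]).pow(2).sum()
    return loss

def scheduled_coeffs(step, total_steps, init, relax_order):
    """Linearly relax each module's coefficient toward zero, one module at a
    time, in the given order (a guessed reading of the phased schedule)."""
    phase = total_steps / len(relax_order)
    coeffs = {}
    for i, group in enumerate(relax_order):
        frac = min(max((step - i * phase) / phase, 0.0), 1.0)
        coeffs[group] = init[group] * (1.0 - frac)
    return coeffs
```

With `relax_order=["language", "vision"]`, the language/action layers are released first while the visual encoder stays pinned to its pretrained prior, matching the abstract's intuition that visual encoders should remain close to the VLM while action-oriented layers adapt more freely. The scheduled penalty would simply be added to the task loss at each training step.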