MARS: Harmonizing Multimodal Convergence via Adaptive Rank Search

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation of multimodal large language models during parameter-efficient fine-tuning, which often stems from imbalanced training dynamics across modalities and reliance on inefficient manual hyperparameter tuning. To overcome this, the authors propose MARS, a novel method that introduces a dual scaling law framework: it models the convergence time of individual modality-specific modules to align their training dynamics and simultaneously predicts task performance to automatically search for the optimal pair of LoRA ranks. This approach enables adaptive hyperparameter selection in multimodal fine-tuning, achieving significant improvements over existing baselines across multiple benchmarks while offering automation, robustness, and computational efficiency.

📝 Abstract
Fine-tuning Multimodal Large Language Models (MLLMs) with parameter-efficient methods like Low-Rank Adaptation (LoRA) is crucial for task adaptation. However, imbalanced training dynamics across modalities often lead to suboptimal accuracy due to negative interference, a challenge typically addressed with inefficient heuristic methods such as manually tuning separate learning rates. To overcome this, we introduce MARS (Multimodal Adaptive Rank Search), an approach to discover optimal rank pairs that balance training dynamics while maximizing performance. Our key innovation, a proposed framework of dual scaling laws, enables this search: one law models module-specific convergence time to prune the search space to candidates with aligned dynamics, while the other predicts final task performance to select the optimal pair from the pruned set. By re-purposing the LoRA rank as a controller for modality-specific convergence speed, MARS outperforms baseline methods and provides a robust, automated strategy for optimizing MLLM fine-tuning.
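The abstract describes a two-stage search: prune candidate rank pairs to those with aligned modality-specific convergence times, then pick the pair with the best predicted performance. A minimal sketch of that procedure, where the power-law convergence form, the log-linear performance law, their coefficients, and all function names are illustrative assumptions rather than the paper's actual fitted laws:

```python
import math
from itertools import product

def convergence_time(rank, a, b):
    # Assumed power-law form: larger LoRA ranks converge faster.
    return a * rank ** (-b)

def predicted_performance(r_vision, r_text, c0, c1, c2):
    # Assumed log-linear performance law over the rank pair.
    return c0 + c1 * math.log(r_vision) + c2 * math.log(r_text)

def mars_rank_search(candidate_ranks, vision_law, text_law, perf_law, tol=0.1):
    """Hypothetical MARS-style search over (vision_rank, text_rank) pairs."""
    # Stage 1: keep only pairs whose modality-specific convergence
    # times agree within a relative tolerance (aligned dynamics).
    aligned = []
    for rv, rt in product(candidate_ranks, repeat=2):
        tv = convergence_time(rv, *vision_law)
        tt = convergence_time(rt, *text_law)
        if abs(tv - tt) / max(tv, tt) <= tol:
            aligned.append((rv, rt))
    # Stage 2: among aligned pairs, select the best predicted performer.
    return max(aligned, key=lambda pair: predicted_performance(*pair, *perf_law))
```

With identical convergence laws for both modalities, only equal-rank pairs survive pruning, and the performance law then favors the largest aligned pair; with different laws, the selected ranks differ per modality, which is the point of treating rank as a convergence-speed controller.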
Problem

Research questions and friction points this paper is trying to address.

Multimodal Large Language Models
Low-Rank Adaptation
training dynamics imbalance
negative interference
modality convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

MARS
Low-Rank Adaptation
Multimodal Large Language Models
dual scaling laws
adaptive rank search