🤖 AI Summary
Existing visual fine-tuning approaches in multimodal large language models often suffer from unstable performance due to visual preference conflicts arising from context-agnostic encoding, and frequently fail to surpass a baseline that keeps the visual encoder frozen. This work proposes CoVFT, a context-aware visual fine-tuning framework that explicitly incorporates multimodal context into the adaptation of the visual encoder. CoVFT introduces Context Vector Extraction (CVE) to capture contextual signals and a Contextual Mixture-of-Experts (CoMoE) module to disentangle conflicting optimization objectives, enabling stable, context-sensitive visual adaptation. Evaluated across 12 benchmarks, CoVFT achieves state-of-the-art performance, with its 7B-parameter model outperforming larger 13B-parameter counterparts on average, substantially unlocking the potential of smaller-scale models.
📝 Abstract
Multimodal large language models (MLLMs) have achieved remarkable progress in cross-modal perception and reasoning, yet a fundamental question remains unresolved: should the vision encoder be fine-tuned or frozen? Despite the success of models such as LLaVA and Qwen-VL, inconsistent design choices and heterogeneous training setups hinder a unified understanding of visual fine-tuning (VFT) in MLLMs. Through a configuration-aligned benchmark, we find that existing VFT methods fail to consistently outperform the frozen baseline across multimodal tasks. Our analysis suggests that this instability arises from visual preference conflicts, where the context-agnostic nature of vision encoders induces divergent parameter updates under diverse multimodal contexts. To address this issue, we propose the Context-aware Visual Fine-tuning (CoVFT) framework, which explicitly incorporates multimodal context into visual adaptation. By integrating a Context Vector Extraction (CVE) module and a Contextual Mixture-of-Experts (CoMoE) module, CoVFT decomposes conflicting optimization signals and enables stable, context-sensitive visual updates. Extensive experiments on 12 multimodal benchmarks demonstrate that CoVFT achieves state-of-the-art performance with superior stability. Notably, fine-tuning a 7B MLLM with CoVFT surpasses the average performance of its 13B counterpart, revealing substantial untapped potential in visual encoder optimization within MLLMs.
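The abstract does not specify the internals of CVE or CoMoE, but the described idea — pool the multimodal context into a vector, then use it to gate a set of expert adapters that apply a residual update to visual features — can be sketched as below. All names, shapes, and the mean-pooling and linear-adapter choices are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def extract_context(context_tokens):
    # Hypothetical CVE: mean-pool multimodal context tokens
    # (e.g. instruction embeddings) into a single context vector.
    return context_tokens.mean(axis=0)

class CoMoE:
    """Hypothetical Contextual Mixture-of-Experts: the context vector
    gates several expert adapters, so conflicting visual preferences
    can be routed to different experts instead of one shared update."""

    def __init__(self, dim, n_experts):
        self.gate = rng.normal(size=(dim, n_experts)) * 0.02
        self.experts = rng.normal(size=(n_experts, dim, dim)) * 0.02

    def __call__(self, visual_feats, context_vec):
        weights = softmax(context_vec @ self.gate)  # (n_experts,)
        # Weighted sum of expert adapter outputs over all visual tokens.
        delta = np.einsum("e,edk,td->tk", weights, self.experts, visual_feats)
        return visual_feats + delta  # residual, context-sensitive update

dim, n_experts = 32, 4
visual = rng.normal(size=(16, dim))    # 16 visual tokens
context = rng.normal(size=(8, dim))    # 8 multimodal context tokens
moe = CoMoE(dim, n_experts)
out = moe(visual, extract_context(context))
print(out.shape)  # (16, 32)
```

In this sketch, different contexts produce different gating weights, so the same frozen-style visual features receive different updates per task — a plausible mechanism for decomposing the conflicting optimization signals the abstract describes.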