🤖 AI Summary
To address computational redundancy and high resource overhead in multi-visual-encoder vision-language models (VLMs), this paper proposes MoVE-KD, a knowledge distillation framework that efficiently transfers complementary knowledge from multiple teacher visual encoders into a single lightweight student encoder. The method introduces two key innovations: (1) a selective knowledge activation mechanism that integrates LoRA with Mixture-of-Experts (MoE), enabling dynamic expert-level feature routing; and (2) an attention-driven adaptive weighted distillation strategy that jointly models teacher-specific representations and visual token importance. Evaluated on LLaVA and LLaVA-NeXT, MoVE-KD significantly enhances the performance of single-encoder VLMs, approaching the accuracy of multi-encoder ensembles, while accelerating inference by 2.1× and reducing GPU memory consumption by 38%.
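The LoRA-with-MoE mechanism described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: all dimensions, the router design, and the one-expert-per-teacher pairing are illustrative assumptions. The idea it shows is that a frozen base projection is kept intact while a router softly gates per-teacher low-rank (LoRA) deltas for each input token.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, n_experts = 16, 4, 3  # hidden dim, LoRA rank, one expert per teacher (illustrative sizes)
W = rng.standard_normal((d, d)) * 0.02            # frozen base projection of the student encoder
A = rng.standard_normal((n_experts, r, d)) * 0.02  # LoRA down-projections, one per expert
B = np.zeros((n_experts, d, r))                    # LoRA up-projections (zero-init, standard for LoRA)
G = rng.standard_normal((d, n_experts)) * 0.02     # router producing per-expert gate logits

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def lora_moe_forward(x):
    """Gate per-teacher LoRA experts per token and add their blended delta to the frozen path."""
    gates = softmax(x @ G)                 # (tokens, n_experts): soft routing weights
    base = x @ W.T                         # frozen-path output
    # expert e contributes B_e @ A_e @ x, weighted by its gate for each token
    deltas = np.einsum('te,edr,erk,tk->td', gates, B, A, x)
    return base + deltas

x = rng.standard_normal((5, d))            # 5 visual tokens
y = lora_moe_forward(x)
print(y.shape)  # (5, 16)
```

Because the up-projections are zero-initialized, the module starts out exactly equal to the frozen base projection, so distillation training only gradually injects teacher-specific behavior through the experts.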
📝 Abstract
Visual encoders are fundamental components in vision-language models (VLMs), each showcasing unique strengths derived from various pre-trained visual foundation models. To leverage the varied capabilities of these encoders, recent studies incorporate multiple encoders within a single VLM, leading to a considerable increase in computational cost. In this paper, we present Mixture-of-Visual-Encoder Knowledge Distillation (MoVE-KD), a novel framework that distills the unique proficiencies of multiple vision encoders into a single, efficient encoder model. Specifically, to mitigate conflicts and retain the unique characteristics of each teacher encoder, we employ low-rank adaptation (LoRA) and mixture-of-experts (MoE) to selectively activate specialized knowledge based on input features, enhancing both adaptability and efficiency. To regularize the KD process and enhance performance, we propose an attention-based distillation strategy that adaptively weights the different visual encoders and emphasizes valuable visual tokens, reducing the burden of replicating comprehensive but distinct features from multiple teachers. Comprehensive experiments on popular VLMs, such as LLaVA and LLaVA-NeXT, validate the effectiveness of our method. The code will be released.
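The attention-based distillation strategy in the abstract can be sketched as a doubly weighted feature-matching loss. The sketch below is an assumption-laden illustration, not the paper's loss: the function name, the use of mean-squared error as the matching term, and the softmax weighting over teacher scores and token-attention logits are all placeholders for whatever the method actually uses. It only conveys the structure: teachers are weighted adaptively, and important visual tokens contribute more.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weighted_kd_loss(student, teachers, teacher_scores, token_attn):
    """Feature-matching KD loss, weighted per teacher and per visual token.

    student:        (tokens, d) student encoder features
    teachers:       list of (tokens, d) teacher features (assumed already projected to d)
    teacher_scores: (n_teachers,) logits for adaptive per-teacher weighting
    token_attn:     (tokens,) attention logits measuring token importance
    """
    w_teacher = softmax(np.asarray(teacher_scores, dtype=float))
    w_token = softmax(np.asarray(token_attn, dtype=float))
    loss = 0.0
    for w, t in zip(w_teacher, teachers):
        per_token = ((student - t) ** 2).mean(axis=-1)  # (tokens,) per-token matching error
        loss += w * (w_token * per_token).sum()         # emphasize valuable tokens
    return loss

rng = np.random.default_rng(0)
s = rng.standard_normal((5, 16))
ts = [rng.standard_normal((5, 16)) for _ in range(2)]
loss = attention_weighted_kd_loss(s, ts, teacher_scores=[0.3, -0.1], token_attn=rng.standard_normal(5))
print(float(loss))
```

The two softmax weightings are what distinguish this from plain multi-teacher distillation: rather than averaging all teachers and tokens uniformly, the student is relieved of faithfully replicating every teacher's full feature map, matching the abstract's goal of reducing that burden.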