🤖 AI Summary
To address expert load imbalance and capability conflicts induced by multi-task heterogeneity in Mixture-of-Experts Vision-Language Models (MoE-VLMs), this paper proposes Astrea, a novel MoE-VLM framework explicitly designed for heterogeneous multi-task learning. Methodologically, it introduces: (1) a heterogeneous expert coordination matrix that explicitly models complementary relationships among detection, segmentation, classification, and captioning experts; (2) a progressive contrastive pre-alignment mechanism that harmonizes these experts within a unified latent space; and (3) stochastic residual connections with probabilistic activation, coupled with an adaptive weight allocator, to enable dynamic knowledge fusion and load balancing. Evaluated on 12 benchmarks spanning VQA, image captioning, and cross-modal retrieval, the framework achieves an average improvement of +4.7% over state-of-the-art methods and provides the first empirical validation of progressive pre-alignment as an effective strategy for building general-purpose multimodal agents.
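The adaptive weight allocator mentioned above can be read as a gating network that mixes the four experts' outputs per token. Below is a minimal NumPy sketch of such softmax gating — an illustration under that reading, not the paper's implementation; the function name, the learned projection `gate_W`, and the `temperature` parameter are all hypothetical:

```python
import numpy as np

def adaptive_expert_fusion(token, expert_outputs, gate_W, temperature=1.0):
    """Fuse expert outputs with a softmax gate computed from the input token.

    token:          (d,) input token embedding
    expert_outputs: (n_experts, d) per-expert transformed features
    gate_W:         (n_experts, d) gating projection (learned in practice)
    """
    logits = gate_W @ token / temperature            # (n_experts,) gate scores
    logits -= logits.max()                           # numerical stability
    weights = np.exp(logits) / np.exp(logits).sum()  # softmax: sums to 1
    fused = weights @ expert_outputs                 # convex combination of experts
    return fused, weights

# Toy example: 4 experts (detection/segmentation/classification/captioning), dim 8.
rng = np.random.default_rng(0)
d, n_experts = 8, 4
token = rng.standard_normal(d)
expert_outputs = rng.standard_normal((n_experts, d))
gate_W = rng.standard_normal((n_experts, d))
fused, weights = adaptive_expert_fusion(token, expert_outputs, gate_W)
```

Because the gate is a softmax, the allocator always produces a convex combination of expert features, so one expert's contribution growing necessarily shrinks the others' — a simple way to calibrate relative contributions in real time.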
📝 Abstract
Vision-Language Models (VLMs) based on Mixture-of-Experts (MoE) architectures have emerged as a pivotal paradigm in multimodal understanding, offering a powerful framework for integrating visual and linguistic information. However, the increasing complexity and diversity of tasks pose significant challenges for balancing load across heterogeneous visual experts, where optimizing one specialist's performance often compromises others' capabilities. To address task heterogeneity and expert load imbalance, we propose Astrea, a novel multi-expert collaborative VLM architecture based on progressive pre-alignment. Astrea introduces three key innovations: 1) a heterogeneous expert coordination mechanism that integrates four specialized models (detection, segmentation, classification, captioning) into a comprehensive expert matrix covering essential elements of visual comprehension; 2) a dynamic knowledge fusion strategy featuring progressive pre-alignment, which harmonizes experts within the VLM latent space through contrastive learning, complemented by probabilistically activated stochastic residual connections that preserve knowledge continuity; 3) an enhanced optimization framework that uses momentum contrastive learning for long-range dependency modeling and adaptive weight allocators for real-time calibration of expert contributions. Extensive evaluations across 12 benchmark tasks spanning VQA, image captioning, and cross-modal retrieval demonstrate Astrea's superiority over state-of-the-art models, with an average performance gain of +4.7%. This study provides the first empirical demonstration that progressive pre-alignment strategies enable VLMs to overcome the limitations of task heterogeneity, establishing new methodological foundations for developing general-purpose multimodal agents.
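The "probabilistically activated stochastic residual connection" can be understood as a stochastic-depth-style Bernoulli gate on the expert branch of a residual block: during training the branch fires with some keep probability, and at inference it is scaled by that probability's expected value. A minimal sketch under that assumption (the function name and `keep_prob` parameter are hypothetical, not from the paper):

```python
import numpy as np

def stochastic_residual(x, expert_fn, keep_prob=0.8, training=True, rng=None):
    """Residual connection whose expert branch fires with probability keep_prob.

    Training: the branch is kept or dropped per call via a Bernoulli gate,
    so the identity path always preserves the incoming knowledge.
    Inference: the branch is scaled by its expectation keep_prob,
    as in stochastic depth.
    """
    if training:
        rng = rng or np.random.default_rng()
        gate = float(rng.random() < keep_prob)  # Bernoulli(keep_prob) sample
        return x + gate * expert_fn(x)
    return x + keep_prob * expert_fn(x)        # deterministic expected path

# Toy example: expert branch doubles its input.
x = np.ones(4)
expert = lambda v: 2.0 * v
y_eval = stochastic_residual(x, expert, keep_prob=0.5, training=False)
# inference: x + 0.5 * 2x = 2x, i.e. [2., 2., 2., 2.]
```

Because the identity path is never gated, the block degrades gracefully to a pass-through when the expert branch is dropped, which is one plausible reading of "preserving knowledge continuity."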