Vision-Centric Activation and Coordination for Multimodal Large Language Models

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Mainstream multimodal large language models (MLLMs) rely solely on text-based autoregressive supervision, neglecting the centrality of visual perception and thereby limiting visual understanding. To address this, we propose VaCo, a vision-centric framework that jointly optimizes image-text representations by integrating fine-grained perceptual features from multiple vision foundation models (e.g., SAM, DINOv2) via activation and coordination mechanisms. Our key contributions are: (1) a visual discriminative alignment mechanism that enhances cross-modal semantic consistency; (2) learnable Modular Task Queries for improved task adaptability; and (3) Visual Alignment Layers coupled with a Token Gateway Mask to mitigate conflicts among heterogeneous visual features. Evaluated on 12 benchmarks, including MMBench and OCRBench, VaCo consistently improves the visual reasoning and fine-grained comprehension of leading MLLMs such as Qwen-VL and LLaVA, demonstrating the effectiveness and generalizability of vision-centric modeling.


📝 Abstract
Multimodal large language models (MLLMs) integrate image features from visual encoders with LLMs, demonstrating advanced comprehension capabilities. However, mainstream MLLMs are solely supervised by the next-token prediction of textual tokens, neglecting critical vision-centric information essential for analytical abilities. To tackle this dilemma, we introduce VaCo, which optimizes MLLM representations through Vision-Centric activation and Coordination from multiple vision foundation models (VFMs). VaCo introduces visual discriminative alignment to integrate task-aware perceptual features extracted from VFMs, thereby unifying the optimization of both textual and visual outputs in MLLMs. Specifically, we incorporate the learnable Modular Task Queries (MTQs) and Visual Alignment Layers (VALs) into MLLMs, activating specific visual signals under the supervision of diverse VFMs. To coordinate representation conflicts across VFMs, the crafted Token Gateway Mask (TGM) restricts the information flow among multiple groups of MTQs. Extensive experiments demonstrate that VaCo significantly improves the performance of different MLLMs on various benchmarks, showcasing its superior capabilities in visual comprehension.
Problem

Research questions and friction points this paper is trying to address.

Enhancing multimodal models with vision-centric optimization
Addressing visual information neglect in language models
Coordinating representation conflicts across vision foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates visual discriminative alignment from multiple foundation models
Uses Modular Task Queries and Visual Alignment Layers
Coordinates representations with Token Gateway Mask mechanism
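The coordination idea behind the Token Gateway Mask can be pictured as a block-diagonal attention mask over grouped task queries, so that queries supervised by one vision foundation model do not exchange information with queries of another group. This is a minimal sketch assuming an additive-mask formulation; the function name `token_gateway_mask` and the grouping layout are illustrative, not the paper's exact construction.

```python
import torch

def token_gateway_mask(num_groups: int, queries_per_group: int) -> torch.Tensor:
    """Block-diagonal additive attention mask: each group of task
    queries may attend only within its own group. Entries of 0.0
    allow attention; -inf blocks it (the mask is added to the
    attention logits before softmax)."""
    n = num_groups * queries_per_group
    mask = torch.full((n, n), float("-inf"))
    for g in range(num_groups):
        s = g * queries_per_group
        # Zero out the diagonal block for group g (intra-group flow allowed)
        mask[s:s + queries_per_group, s:s + queries_per_group] = 0.0
    return mask

# Example: 3 VFM-specific query groups of 4 queries each -> a 12x12 mask
mask = token_gateway_mask(3, 4)
```

Such a mask can be passed as the `attn_mask` argument of a standard attention layer; each group then specializes toward its supervising VFM without cross-group interference.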
👥 Authors
Yunnan Wang
Department of Computer Science and Engineering, Shanghai Jiao Tong University
Computer Vision · Multimodal Representation Learning
Fan Lu
Ant Group
Kecheng Zheng
Ant Group
Ziyuan Huang
Ant Group
Ziqiang Li
Associate Professor, Nanjing University of Information Sciences and Technology
AIGC · Backdoor Learning · AI Security
Wenjun Zeng
Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo
Xin Jin
Ningbo Institute of Digital Twin, Eastern Institute of Technology, Ningbo