LaCo: Efficient Layer-wise Compression of Visual Tokens for Multimodal Large Language Models

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) commonly rely on post-encoder visual token compression, which limits their efficiency gains. To address this, the paper proposes a layer-wise compression framework embedded within the visual encoder's intermediate layers. The approach combines a pixel-shuffle-driven space-to-channel transformation with parameter-free residual shortcuts, enabling token compression during encoding while preserving accuracy. Because it avoids auxiliary modules and their computational overhead, training efficiency improves by over 20% and inference throughput by more than 15%. Experiments across diverse downstream tasks and model scales show consistent gains over state-of-the-art external compression methods, establishing a scalable architectural paradigm for efficient multimodal understanding.

📝 Abstract
Existing visual token compression methods for Multimodal Large Language Models (MLLMs) predominantly operate as post-encoder modules, limiting their potential for efficiency gains. To address this limitation, we propose LaCo (Layer-wise Visual Token Compression), a novel framework that enables effective token compression within the intermediate layers of the vision encoder. LaCo introduces two core components: 1) a layer-wise pixel-shuffle mechanism that systematically merges adjacent tokens through space-to-channel transformations, and 2) a residual learning architecture with non-parametric shortcuts that preserves critical visual information during compression. Extensive experiments indicate that LaCo outperforms all existing methods when compressing tokens in the intermediate layers of the vision encoder, demonstrating superior effectiveness. In addition, compared to external compression, our method improves training efficiency by over 20% and inference throughput by more than 15% while maintaining strong performance.
Problem

Research questions and friction points this paper is trying to address.

Post-encoder visual token compression limits the efficiency gains achievable in MLLMs
How to compress tokens inside the vision encoder's intermediate layers without discarding critical visual information
How to improve training and inference efficiency while maintaining downstream performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise token compression within vision encoder
Pixel-shuffle merges adjacent tokens efficiently
Residual learning preserves critical visual information
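The pixel-shuffle merge described above can be illustrated with a short sketch: an r x r block of neighboring tokens is folded into the channel dimension (space-to-channel), shrinking the token count by r^2, while a parameter-free mean-pooled shortcut of each block retains a residual path. This is a hypothetical illustration based on the abstract, not the authors' implementation; the learned projection that maps the enlarged channels back to the model width is omitted.

```python
import numpy as np

def pixel_shuffle_compress(tokens, grid, r=2):
    """Sketch of layer-wise pixel-shuffle token merging (illustrative only).

    tokens: (H*W, C) array of visual tokens laid out on an H x W grid.
    Returns the merged tokens (H*W/r^2, r^2*C) and a parameter-free
    mean-pooled shortcut (H*W/r^2, C) usable as a residual.
    """
    H, W = grid
    C = tokens.shape[1]
    x = tokens.reshape(H, W, C)
    # Group each r x r spatial block together: (H/r, W/r, r, r, C).
    x = x.reshape(H // r, r, W // r, r, C).transpose(0, 2, 1, 3, 4)
    # Space-to-channel: fold the r x r block into the channel dimension.
    merged = x.reshape((H // r) * (W // r), r * r * C)
    # Non-parametric shortcut: average the tokens in each block.
    shortcut = x.mean(axis=(2, 3)).reshape(-1, C)
    return merged, shortcut

# 4x4 grid of 8-dim tokens -> 4 merged tokens of 32 dims each.
tokens = np.arange(16 * 8, dtype=np.float64).reshape(16, 8)
merged, shortcut = pixel_shuffle_compress(tokens, (4, 4), r=2)
```

In the full model, `merged` would pass through a learned projection back to C channels before the shortcut is added; here the two are returned separately to keep the sketch parameter-free.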
Juntao Liu — Pattern Recognition Center, WeChat AI, Tencent Inc, China
Liqiang Niu — WeChat AI, Tencent (natural language processing, machine learning, deep learning)
Wenchao Chen — Pattern Recognition Center, WeChat AI, Tencent Inc, China
Jie Zhou — Pattern Recognition Center, WeChat AI, Tencent Inc, China
Fandong Meng — WeChat AI, Tencent (Machine Translation, Natural Language Processing)