SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the opacity of visual understanding mechanisms and high inference overhead in multimodal large language models (MLLMs). We first discover that MLLM attention heads exhibit high sparsity in visual response—only ~5% contribute substantially. Leveraging this insight, we propose a training-free visual head identification framework and design a heterogeneous key-value (KV) cache sparsification strategy guided by visual importance scoring: full KV states are retained for visually critical heads, while non-critical heads undergo dynamic KV pruning. This enables asymmetric optimization of computation and memory. Evaluated on mainstream multimodal benchmarks, our method achieves 1.38× measured speedup and 52% memory reduction with zero accuracy loss—significantly outperforming generic KV compression approaches. The framework establishes a new paradigm for efficient and interpretable MLLM inference.
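The heterogeneous KV-cache strategy described above can be illustrated with a minimal sketch. This is not the authors' implementation; the `full_ratio`, `min_keep`, and head counts are assumed for illustration, and only the core idea is shown: heads with high visual scores keep the full cache, while the rest are capped at a small budget.

```python
import numpy as np

def allocate_kv_budget(visual_scores, cache_len, full_ratio=0.05, min_keep=64):
    """Assign a per-head KV-cache budget from visual-relevance scores.

    Heads in the top `full_ratio` fraction (the paper finds only ~5% of
    heads matter for vision) retain the full cache; all other heads keep
    at most `min_keep` entries, to be pruned dynamically during decoding.
    """
    scores = np.asarray(visual_scores)
    num_full = max(1, int(len(scores) * full_ratio))
    full_heads = set(np.argsort(scores)[::-1][:num_full].tolist())
    return {h: cache_len if h in full_heads else min(min_keep, cache_len)
            for h in range(len(scores))}
```

With these toy numbers (32 heads, a 4096-token cache), one visually critical head keeps all 4096 entries while the remaining 31 keep 64 each, so only about 5% of the KV state is retained overall.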

📝 Abstract
Multimodal Large Language Models (MLLMs) are commonly derived by extending pre-trained Large Language Models (LLMs) with visual capabilities. In this work, we investigate how MLLMs process visual inputs by analyzing their attention mechanisms. We reveal a surprising sparsity phenomenon: only a small subset (fewer than approximately 5%) of attention heads in LLMs actively contributes to visual understanding; we term these visual heads. To identify these heads efficiently, we design a training-free framework that quantifies head-level visual relevance through targeted response analysis. Building on this discovery, we introduce SparseMM, a KV-Cache optimization strategy that allocates asymmetric computation budgets to heads in LLMs based on their visual scores, leveraging the sparsity of visual heads to accelerate MLLM inference. In contrast to prior KV-Cache acceleration methods that ignore the particularity of visual tokens, SparseMM prioritizes preserving visual semantics during decoding. Extensive evaluations across mainstream multimodal benchmarks demonstrate that SparseMM achieves superior accuracy-efficiency trade-offs. Notably, SparseMM delivers 1.38x real-time acceleration and 52% memory reduction during generation while maintaining performance parity on the efficiency test. Our project is open sourced at https://github.com/CR400AF-A/SparseMM.
Problem

Research questions and friction points this paper is trying to address.

Which attention heads in MLLMs actively contribute to visual understanding, and how sparse are they?
Can visually relevant heads be identified without any additional training?
How can the KV-Cache be optimized to accelerate MLLM inference with minimal accuracy loss?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies visual heads via training-free response analysis
Allocates asymmetric KV-Cache budgets to heads based on their visual scores
Exploits visual head sparsity to accelerate MLLM inference
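The head-scoring step above can be sketched with a toy example. This is not the paper's targeted response analysis; here a head is scored simply by the attention mass it places on image-token positions, which captures the same intuition, and all shapes and names are illustrative.

```python
import numpy as np

def visual_head_scores(attn, visual_mask):
    """Score each head by the attention mass it assigns to visual tokens.

    attn: (num_heads, num_queries, num_keys) softmax attention weights.
    visual_mask: (num_keys,) bool, True where the key is an image token.
    Returns one score per head, averaged over query positions.
    """
    return attn[:, :, visual_mask].sum(axis=-1).mean(axis=-1)

# Toy case: 4 heads, 2 queries, 6 keys; keys 0-2 are image tokens.
# Head 0 concentrates 90% of its attention on the image tokens.
mask = np.array([True] * 3 + [False] * 3)
attn = np.full((4, 2, 6), 1 / 6)
attn[0, :, :3] = 0.3
attn[0, :, 3:] = 1 / 30
scores = visual_head_scores(attn, mask)   # head 0 scores highest
```

Ranking heads by such a score and keeping the top few percent is what makes the identification training-free: it needs only forward passes, no gradient updates.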