When Tokens Talk Too Much: A Survey of Multimodal Long-Context Token Compression across Images, Videos, and Audios

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) face significant efficiency bottlenecks when processing long-context inputs—such as high-resolution images, lengthy videos, and extended audio—due to the quadratic computational complexity of self-attention. To address this, we propose the first unified taxonomy for multimodal long-context token compression, jointly organized along two dimensions: (i) modality-specific redundancy characteristics (spatial, spatio-temporal, and spectral), and (ii) technical mechanisms (transformation-based, similarity-based, attention-based, and query-based paradigms). We systematically survey existing approaches, identify core challenges—including fidelity preservation, cross-modal alignment, and adaptive compression—and outline promising future research directions. Furthermore, we publicly release a dynamically updated multimodal compression knowledge base. This work establishes the first structured theoretical foundation and practical guideline for efficient multimodal long-context modeling.
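The quadratic bottleneck the summary refers to comes from the attention score matrix, whose cost grows with the square of the token count, so halving the tokens quarters that cost. A minimal sketch (the token budgets below are hypothetical, chosen only to illustrate the scaling, not taken from the paper):

```python
def attention_score_cost(num_tokens: int, head_dim: int) -> int:
    """Rough multiply-add count for the QK^T score matrix of one
    self-attention head: num_tokens^2 * head_dim."""
    return num_tokens * num_tokens * head_dim

# Hypothetical example: a 448x448 image patched at 14x14 yields
# 32 * 32 = 1024 visual tokens. Compressing to 256 tokens (4x fewer)
# shrinks the score-matrix cost by 4^2 = 16x.
full = attention_score_cost(1024, 64)
compressed = attention_score_cost(256, 64)
assert full // compressed == 16
```

This quadratic payoff is why token compression, rather than, say, a linear-cost pruning of layers, is the lever this survey focuses on.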

📝 Abstract
Multimodal large language models (MLLMs) have made remarkable strides, largely driven by their ability to process increasingly long and complex contexts, such as high-resolution images, extended video sequences, and lengthy audio inputs. While this ability significantly enhances MLLM capabilities, it introduces substantial computational challenges, primarily due to the quadratic complexity of self-attention mechanisms over large numbers of input tokens. To mitigate these bottlenecks, token compression has emerged as a promising and critical approach, efficiently reducing the number of tokens during both training and inference. In this paper, we present the first systematic survey and synthesis of the burgeoning field of multimodal long-context token compression. Recognizing that effective compression strategies are deeply tied to the unique characteristics and redundancies of each modality, we categorize existing approaches by their primary data focus, enabling researchers to quickly access and learn methods tailored to their specific area of interest: (1) image-centric compression, which addresses spatial redundancy in visual data; (2) video-centric compression, which tackles spatio-temporal redundancy in dynamic sequences; and (3) audio-centric compression, which handles temporal and spectral redundancy in acoustic signals. Beyond this modality-driven categorization, we further dissect methods based on their underlying mechanisms, including transformation-based, similarity-based, attention-based, and query-based approaches. By providing a comprehensive and structured overview, this survey aims to consolidate current progress, identify key challenges, and inspire future research directions in this rapidly evolving domain. We also maintain a public repository to continuously track and update the latest advances in this promising area.
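Of the four mechanism families the abstract lists, the similarity-based one is the easiest to convey concretely: redundant tokens (e.g., patches from a uniform sky region) are merged when their embeddings nearly coincide. A minimal greedy sketch under assumed details — the function name, the cosine-similarity threshold, and the left-to-right scan are illustrative choices, not a method from the paper:

```python
import numpy as np

def merge_similar_tokens(tokens: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Greedy similarity-based compression sketch: scan tokens left to
    right and fold each one into the previously kept token when their
    cosine similarity exceeds `threshold`; otherwise keep it as new."""
    kept = [tokens[0].astype(float).copy()]
    counts = [1]  # members merged into each kept token, for running means
    for tok in tokens[1:]:
        prev = kept[-1]
        sim = float(prev @ tok) / (np.linalg.norm(prev) * np.linalg.norm(tok) + 1e-8)
        if sim > threshold:
            counts[-1] += 1
            kept[-1] = prev + (tok - prev) / counts[-1]  # running mean
        else:
            kept.append(tok.astype(float).copy())
            counts.append(1)
    return np.stack(kept)

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))
tokens = np.repeat(base, 3, axis=0)  # triplicate tokens: fake spatial redundancy
compressed = merge_similar_tokens(tokens)
assert compressed.shape[0] < tokens.shape[0]  # redundancy removed
```

Real methods in this family (e.g., learned or bipartite matching rather than a greedy scan) differ in how they pick merge partners, but the core idea, collapsing near-duplicate embeddings into one representative token, is the same.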
Problem

Research questions and friction points this paper is trying to address.

Address computational challenges in multimodal long-context processing
Survey token compression methods for images, videos, and audios
Categorize compression approaches by modality and underlying mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token compression reduces multimodal input tokens.
Modality-specific methods target unique data redundancies.
Survey categorizes compression by mechanism and modality.