OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the computational bottlenecks induced by audio-video token sequences in OmniLLMs, this paper proposes a training-free, audio-guided dynamic multimodal token compression framework. The method leverages audio saliency detection to guide video token pruning, preserves critical semantic information via cross-modal similarity modeling, and introduces a time-group-based audio retention scoring mechanism alongside an interleaved spatiotemporal compression strategy—enabling fine-grained, adaptive multimodal token reduction. To our knowledge, this is the first work to achieve audio-driven dynamic video token compression without model fine-tuning. Experimental results demonstrate substantial improvements in inference efficiency: compared to the current state-of-the-art, our approach achieves a 3.42× speedup in inference latency and reduces memory consumption by 1.4×, while fully preserving original task performance across diverse multimodal benchmarks.
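The pipeline sketched in the summary (a per-time-group audio retention score that sets a video-token keep budget, with tokens ranked by cross-modal similarity to audio anchors) could look roughly like the following. All function names, the scoring rule, and the budget allocation are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of audio-guided video token pruning in the spirit of
# OmniZip; scoring and allocation details are illustrative, not the paper's.

def retention_scores(audio_saliency, groups):
    """Mean audio saliency per time group (higher = more information-dense)."""
    return [sum(audio_saliency[i] for i in grp) / len(grp) for grp in groups]

def keep_budgets(scores, total_keep):
    """Allocate the overall video-token keep budget proportionally to scores."""
    z = sum(scores)
    return [max(1, round(total_keep * s / z)) for s in scores]

def prune_group(video_tokens, audio_anchor, k):
    """Keep the k video tokens most similar (dot product) to the audio anchor."""
    def sim(v):
        return sum(a * b for a, b in zip(v, audio_anchor))
    return sorted(video_tokens, key=sim, reverse=True)[:k]

# Usage: two time groups over four audio frames; the more salient group
# receives a larger share of the 10-token budget.
scores = retention_scores([0.9, 0.7, 0.1, 0.3], [[0, 1], [2, 3]])
budgets = keep_budgets(scores, total_keep=10)
kept = prune_group([[1, 0], [0, 1], [1, 1]], audio_anchor=[1, 0], k=budgets[1])
```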

📝 Abstract
Omnimodal large language models (OmniLLMs) have recently attracted increasing research attention for unified audio-video understanding; however, processing audio-video token sequences creates a significant computational bottleneck. Existing token compression methods have yet to accommodate this emerging need to jointly compress multimodal tokens. To bridge this gap, we present OmniZip, a training-free, audio-guided audio-visual token-compression framework that optimizes multimodal token representation and accelerates inference. Specifically, OmniZip first identifies salient audio tokens, then computes an audio retention score for each time group to capture information density, thereby dynamically guiding video token pruning and preserving cues from audio anchors enhanced by cross-modal similarity. For each time window, OmniZip compresses the video tokens using an interleaved spatio-temporal scheme. Extensive empirical results demonstrate the merits of OmniZip: it achieves a 3.42× inference speedup and 1.4× memory reduction over other top-performing counterparts, while maintaining performance with no training.
Problem

Research questions and friction points this paper is trying to address.

Compressing multimodal tokens to reduce computational bottlenecks in omnimodal models
Accelerating inference speed while maintaining performance without requiring training
Dynamically pruning video tokens using audio-guided retention scores
Innovation

Methods, ideas, or system contributions that make the work stand out.

Audio-guided dynamic token compression for multimodal models
Training-free framework optimizing token representation and inference
Interleaved spatio-temporal compression using audio retention scores
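The interleaved spatio-temporal idea above could be illustrated minimally as follows: within a time window, keep some frames at full spatial resolution and spatially subsample the frames in between. The strides and the flat token-list layout are assumptions for illustration, not OmniZip's actual configuration.

```python
# Illustrative interleaved spatio-temporal compression of a time window.
# Each frame is a flat list of visual tokens; strides are assumed values.

def interleave_compress(window, s_stride=2, t_stride=2):
    """Keep every t_stride-th frame whole; subsample the rest spatially."""
    out = []
    for t, frame in enumerate(window):
        if t % t_stride == 0:
            out.append(frame)               # temporal keep: full frame
        else:
            out.append(frame[::s_stride])   # spatial keep: strided tokens
    return out

# Usage: a 3-frame window of 4 tokens each drops to 10 tokens total.
compressed = interleave_compress([[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]])
```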
Keda Tao
Westlake University
Generative Model · Computer Vision · MLLM
Kele Shao
Zhejiang University, Westlake University
MLLM · Efficient AI · Computer Vision
Bohan Yu
Ant Group
Weiqiang Wang
Ant Group
Jian Liu
Ant Group
Huan Wang
Westlake University