FastAV: Efficient Token Pruning for Audio-Visual Large Language Model Inference

📅 2026-01-19
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the inefficiency of audio-visual large language models (AV-LLMs), whose inference is hindered by excessive token counts from multimodal inputs, while existing pruning methods fail to account for their unique characteristics. To this end, we propose FastAV, the first efficient token pruning framework tailored for AV-LLMs. FastAV identifies critical tokens through attention weight analysis and applies global pruning in intermediate layers followed by fine-grained pruning in later layers, balancing computational efficiency with generation quality. Notably, it operates without requiring full attention maps, making it compatible with efficient attention mechanisms such as FlashAttention. Evaluated on two mainstream AV-LLMs, FastAV reduces FLOPs by over 40% while maintaining or even improving model performance.

πŸ“ Abstract
In this work, we present FastAV, the first token pruning framework tailored for audio-visual large language models (AV-LLMs). While token pruning has been actively explored in standard large language models (LLMs) and vision-language models (LVLMs), its application to AV-LLMs has received little attention, even though multimodal integration substantially increases their token demands. To address this gap, we introduce a pruning strategy that uses attention weights to identify which tokens are emphasized at different stages of inference and to estimate their importance. Building on this analysis, FastAV applies a two-stage pruning strategy: (1) global pruning in intermediate layers to remove broadly less influential tokens, and (2) fine-grained pruning in later layers that accounts for each token's impact on next-token generation. Notably, our method does not rely on full attention maps, which makes it fully compatible with efficient attention mechanisms such as FlashAttention. Extensive experiments demonstrate that FastAV reduces FLOPs by more than 40% on two representative AV-LLMs, while preserving or even improving model performance.
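The two-stage schedule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the importance scores, keep ratios, and per-stage layer placement here are assumptions, and the score would in practice come from attention weights (e.g., the attention each token receives from a query position, which avoids materializing the full attention map).

```python
import numpy as np

def prune_tokens(hidden_states, importance, keep_ratio):
    """Keep the top fraction of tokens by importance, preserving token order.

    hidden_states: (seq_len, dim) token representations at some layer.
    importance:    (seq_len,) per-token scores (assumed attention-derived).
    """
    seq_len = hidden_states.shape[0]
    k = max(1, int(seq_len * keep_ratio))
    # Top-k token indices, re-sorted so surviving tokens keep their order.
    keep = np.sort(np.argsort(importance)[-k:])
    return hidden_states[keep], importance[keep]

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 64))   # 100 multimodal tokens (illustrative)
score = rng.random(100)              # stand-in for attention-based importance

# Stage 1: coarse global pruning at an intermediate layer (100 -> 50 tokens).
x, score = prune_tokens(x, score, keep_ratio=0.5)
# Stage 2: fine-grained pruning at a later layer (50 -> 30 tokens).
x, score = prune_tokens(x, score, keep_ratio=0.6)
print(x.shape)  # (30, 64)
```

Because only a per-token score vector is needed rather than the full seq-by-seq attention matrix, a scheme of this shape stays compatible with fused attention kernels such as FlashAttention, which never expose the full map.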
Problem

Research questions and friction points this paper is trying to address.

audio-visual large language models
token pruning
multimodal integration
inference efficiency
AV-LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

token pruning
audio-visual LLMs
attention-based importance estimation
two-stage pruning
efficient inference