🤖 AI Summary
This work addresses the inefficiency of audio-visual large language models (AV-LLMs), whose inference is hindered by excessive token counts from multimodal inputs, while existing pruning methods fail to account for their unique characteristics. To this end, we propose FastAV, the first efficient token pruning framework tailored for AV-LLMs. FastAV identifies critical tokens through attention weight analysis and applies global pruning in intermediate layers followed by fine-grained pruning in later layers, balancing computational efficiency with generation quality. Notably, it operates without requiring full attention maps, making it compatible with efficient attention mechanisms such as FlashAttention. Evaluated on two mainstream AV-LLMs, FastAV reduces FLOPs by over 40% while maintaining or even improving model performance.
📄 Abstract
In this work, we present FastAV, the first token pruning framework tailored for audio-visual large language models (AV-LLMs). While token pruning has been actively explored in standard large language models (LLMs) and vision-language models (LVLMs), its application to AV-LLMs has received little attention, even though multimodal integration substantially increases their token demands. To address this gap, we introduce a pruning strategy that utilizes attention weights to identify tokens emphasized at different stages and estimates their importance. Building on this analysis, FastAV applies a two-stage pruning strategy: (1) global pruning in intermediate layers to remove broadly less influential tokens, and (2) fine-grained pruning in later layers that accounts for the impact on next-token generation. Notably, our method does not rely on full attention maps, which makes it fully compatible with efficient attention mechanisms such as FlashAttention. Extensive experiments demonstrate that FastAV reduces FLOPs by more than 40% on two representative AV-LLMs, while preserving or even improving model performance.
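To make the general idea concrete, here is a minimal NumPy sketch of attention-weight-based token pruning under assumptions of our own (the function names, the use of only the final query's attention row, and the keep ratios are illustrative, not the paper's actual algorithm). The key point it mirrors is that scoring tokens by one row of attention weights avoids materializing the full attention map, which is what keeps such pruning compatible with FlashAttention-style kernels:

```python
import numpy as np

def importance_from_last_query(q_last, K):
    """Score tokens by the attention weights of the final query
    over all key tokens. Only one row of the attention map is
    computed, so no full attention matrix is needed.
    (Illustrative scoring rule, not FastAV's exact criterion.)"""
    d = K.shape[-1]
    logits = K @ q_last / np.sqrt(d)
    logits -= logits.max()          # numerical stability for softmax
    weights = np.exp(logits)
    return weights / weights.sum()

def prune_tokens(hidden, scores, keep_ratio):
    """Keep the top-scoring tokens, preserving their original order."""
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.sort(np.argsort(scores)[-k:])
    return hidden[keep], keep

# Toy two-stage pruning: a coarse "global" prune at an intermediate
# layer, then a finer prune at a later layer (ratios are arbitrary).
rng = np.random.default_rng(0)
hidden = rng.standard_normal((120, 32))   # 120 multimodal tokens
q_last = rng.standard_normal(32)

scores = importance_from_last_query(q_last, hidden)
hidden, kept = prune_tokens(hidden, scores, keep_ratio=0.5)   # stage 1
scores = importance_from_last_query(q_last, hidden)
hidden, kept = prune_tokens(hidden, scores, keep_ratio=0.5)   # stage 2
print(hidden.shape)  # 30 of the original 120 tokens remain
```

Because self-attention cost grows quadratically in sequence length, halving the token count twice like this cuts the attention FLOPs of subsequent layers substantially, which is the source of the reported savings.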