🤖 AI Summary
This work addresses the high memory overhead and low inference efficiency of large vision-language models during decoding, particularly in long-context tasks involving multiple high-resolution images or videos, where lengthy sequences of visual and textual tokens strain computational resources. To tackle this challenge, the authors propose AttentionPack, a novel framework that exploits the implicit low-rank structure of key-value matrices in multi-head attention. AttentionPack introduces an attention-aware key-value cache compression scheme coupled with token-level decompression, further enhanced by cache eviction, quantization, and operator optimization techniques. The method achieves up to 8× improvement in memory efficiency across multiple benchmarks while preserving model output quality and retrieval performance, substantially boosting batch throughput and enabling larger batch sizes, longer context lengths, or faster inference.
📝 Abstract
Large Vision-Language Models (VLMs) have achieved remarkable success in multi-modal reasoning, but their inference-time efficiency remains a significant challenge due to memory overhead during decoding, especially when the query and answer consist of long sequences of visual and text tokens. This paper presents AttentionPack, an adaptive, attention-aware optimization framework that improves the memory efficiency of large vision-language models during decoding, addressing the challenges posed by the high number of visual inputs and interactions, particularly in long-context tasks with multiple high-resolution images or videos. AttentionPack is novel in two respects: (i) we introduce a multi-head attention compaction method that stores key and value matrices economically by exploiting their implicit low-rank structure, and (ii) we develop a token-specific, attention-aware decompression mechanism that reduces latency overhead. Experimental results on multiple benchmarks demonstrate that AttentionPack improves memory efficiency by up to 8x, enabling larger batch sizes and faster batch inference while preserving model output quality, or longer context lengths for superior retrieval performance. We also show that AttentionPack combines effectively with eviction, quantization, and kernel fusion, yielding further efficiency gains in resource-limited environments.
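The paper does not include code here, but the core idea of exploiting low-rank structure in cached key/value matrices can be illustrated with a minimal sketch: factor each per-head K (or V) matrix with a truncated SVD, store the two small factors instead of the full matrix, and reconstruct only the rows needed at each decoding step. The function names, shapes, and the SVD-based factorization below are illustrative assumptions, not AttentionPack's actual implementation.

```python
import numpy as np

def compress_kv(mat, rank):
    """Truncated-SVD compression of a (seq_len, head_dim) K or V matrix.

    Stores two low-rank factors instead of the full matrix, cutting memory
    from seq_len * head_dim to rank * (seq_len + head_dim) entries.
    (Illustrative stand-in for the paper's compaction scheme.)
    """
    U, S, Vt = np.linalg.svd(mat, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (seq_len, rank), singular values folded in
    B = Vt[:rank, :]             # (rank, head_dim)
    return A, B

def decompress_rows(A, B, token_ids):
    """Token-level decompression: reconstruct only the requested rows,
    avoiding a full-matrix materialization at each decoding step."""
    return A[token_ids] @ B

rng = np.random.default_rng(0)
seq_len, head_dim, rank = 512, 64, 16
# Synthetic K matrix with exact low-rank structure, for illustration only.
K = rng.standard_normal((seq_len, rank)) @ rng.standard_normal((rank, head_dim))

A, B = compress_kv(K, rank)
K_hat = decompress_rows(A, B, np.arange(seq_len))
ratio = (seq_len * head_dim) / (rank * (seq_len + head_dim))
print(np.allclose(K, K_hat), round(ratio, 2))
```

At the true rank the reconstruction is exact up to floating point, and the factor storage is about 3.6x smaller here; in practice the achievable compression depends on how close the real K/V matrices are to low rank and on the accuracy tolerated.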