DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models

📅 2024-11-22
🏛️ arXiv.org
📈 Citations: 4
✨ Influential: 1
🤖 AI Summary
Video large language models (VLLMs) suffer from slow inference and high GPU memory consumption due to the generation of massive visual tokens from video inputs. To address this, we propose a **training-free, decoding-time dynamic visual token compression** method operating along two dimensions: temporal fusion of tokens across key frames and spatially adaptive pruning of the KV cache, enabling progressive retention of critical tokens at each decoding step. Our approach introduces, for the first time, a plug-in temporal compression module and a dynamic KV reduction mechanism, overcoming the limitations of static pruning that often discards semantically important tokens. Extensive experiments on multiple video understanding benchmarks demonstrate that our method achieves **1.5× inference speedup and 1.4× GPU memory reduction**, while consistently outperforming the baseline in accuracy. This significantly enhances the deployment efficiency of VLLMs without architectural or training modifications.

๐Ÿ“ Abstract
Video large language models (VLLMs) have advanced significantly in processing complex video content, yet their inference efficiency remains constrained by the high computational cost stemming from the thousands of visual tokens generated from video inputs. We empirically observe that, unlike with single-image inputs, VLLMs typically attend to visual tokens from different frames at different decoding iterations, making a one-shot pruning strategy prone to removing important tokens by mistake. Motivated by this, we present DyCoke, a training-free token compression method to optimize token representation and accelerate VLLMs. DyCoke incorporates a plug-and-play temporal compression module to minimize temporal redundancy by merging redundant tokens across frames, and applies dynamic KV cache reduction to selectively prune spatially redundant tokens. It ensures high-quality inference by dynamically retaining the critical tokens at each decoding step. Extensive experimental results demonstrate that DyCoke outperforms prior SoTA counterparts, achieving a 1.5× inference speedup and 1.4× memory reduction over the baseline VLLM while still improving performance, all without training.
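The dynamic KV cache reduction described in the abstract can be illustrated with a minimal sketch: at each decoding step, rank the cached visual tokens by the attention they receive from the current query and retain only the top fraction. The function name, scoring input, and keep ratio below are illustrative assumptions, not the paper's actual implementation.

```python
def select_retained_tokens(attn_scores, keep_ratio=0.5):
    """Illustrative sketch of per-step dynamic KV pruning (not DyCoke's code).

    attn_scores: attention weight each cached visual token received from
    the current decoding query. Returns the (sorted) indices of the tokens
    to keep in the KV cache; the rest would be pruned for this step.
    """
    k = max(1, int(len(attn_scores) * keep_ratio))
    # rank token indices by received attention, highest first
    ranked = sorted(range(len(attn_scores)),
                    key=lambda i: attn_scores[i], reverse=True)
    return sorted(ranked[:k])
```

Because the ranking is recomputed every decoding step, tokens that matter only for later steps are not discarded once and for all, which is the failure mode of one-shot static pruning the paper highlights.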
Problem

Research questions and friction points this paper is trying to address.

Reduces computational cost of video large language models
Dynamically compresses visual tokens to avoid pruning errors
Improves inference speed and memory efficiency without training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic token compression for VLLMs
Plug-and-play temporal redundancy reduction
Dynamic KV cache pruning for efficiency
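The plug-and-play temporal redundancy reduction listed above can be sketched in a simplified form: drop tokens that barely change between adjacent frames. The per-position cosine-similarity rule and threshold below are assumptions for illustration; DyCoke's actual merging criterion may differ.

```python
import math

def cosine(u, v):
    """Cosine similarity between two token vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def drop_temporal_duplicates(frames, threshold=0.95):
    """Illustrative temporal compression sketch (not DyCoke's code).

    frames: list of frames, each a list of token vectors. Keeps every token
    of the first (key) frame; for each later frame, drops tokens nearly
    identical to the same-position token in the previous frame.
    """
    kept = [list(frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        kept.append([tok for p, tok in zip(prev, cur)
                     if cosine(p, tok) <= threshold])
    return kept
```

In a real VLLM this kind of merging shrinks the visual token sequence before decoding begins, which is why it can be added as a plug-in module without retraining.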