KiToke: Kernel-based Interval-aware Token Compression for Video Large Language Models

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost of video large language models caused by their excessive number of visual tokens. The authors propose a training-free, query-agnostic token compression method that introduces a kernel-based global redundancy metric to enable content-adaptive selection of critical tokens. They further design a lightweight temporal interval construction mechanism, coupled with an interval-aware temporal token merging strategy, to preserve essential visual content and temporal coherence. The approach overcomes the limitations of existing local or segment-wise heuristic compression techniques: it achieves substantial gains across multiple video understanding benchmarks and diverse model backbones, even at extreme compression ratios that retain only 1% of the original tokens, and significantly outperforms current training-free compression methods.
📝 Abstract
Video Large Language Models (Video LLMs) achieve strong performance on video understanding tasks but suffer from high inference costs due to the large number of visual tokens. We propose KiToke, a training-free, query-agnostic token compression approach that reduces spatiotemporal redundancy while preserving critical visual information. Our method estimates token diversity globally using a kernel-based redundancy measure, enabling content-adaptive selection that remains effective under extreme token budgets, and further introduces a lightweight temporal interval construction with interval-aware token merging to maintain temporal coherence. Unlike prior methods that rely on local or segment-level heuristics, KiToke explicitly captures global redundancy across an entire video, leading to more efficient token utilization. Extensive experiments on multiple video understanding benchmarks and Video LLM backbones demonstrate that KiToke consistently outperforms existing training-free compression methods, with particularly large gains at aggressive retention ratios down to 1%.
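The abstract's core idea, estimating token diversity globally with a kernel-based redundancy measure and keeping only the most distinctive tokens, can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the RBF kernel, the sum-of-similarities redundancy score, and the `gamma` and `keep_ratio` parameters are all assumptions.

```python
import numpy as np

def kernel_redundancy_select(tokens: np.ndarray, keep_ratio: float = 0.01,
                             gamma: float = 1.0) -> np.ndarray:
    """Select a diverse subset of visual tokens via a kernel-based
    global redundancy score (illustrative sketch of the idea only).

    tokens: (N, D) array of visual token embeddings.
    Returns the indices of the retained tokens.
    """
    # Pairwise squared distances -> RBF kernel similarities over ALL tokens,
    # so redundancy is measured globally rather than per frame or segment.
    sq_norms = (tokens ** 2).sum(axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * tokens @ tokens.T
    K = np.exp(-gamma * np.maximum(sq_dists, 0.0))

    # A token's redundancy = its total kernel similarity to all other tokens.
    redundancy = K.sum(axis=1) - 1.0  # subtract self-similarity K[i, i] = 1

    # Content-adaptive budget: keep the least redundant (most globally
    # distinctive) tokens, down to extreme ratios such as 1%.
    n_keep = max(1, int(round(keep_ratio * len(tokens))))
    return np.argsort(redundancy)[:n_keep]
```

Scoring against the full kernel matrix is what makes the selection "global": a token repeated across many distant frames accumulates high similarity mass and is pruned, which a segment-local heuristic would miss.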
Problem

Research questions and friction points this paper is trying to address.

Video Large Language Models
token compression
inference cost
visual tokens
spatiotemporal redundancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

kernel-based redundancy
interval-aware token merging
global token diversity
training-free compression
temporal coherence
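The interval-aware merging listed above (described in the abstract as "lightweight temporal interval construction with interval-aware token merging") could look roughly like the sketch below: contiguous frames whose features stay similar form one interval, and each interval is merged into a single averaged token so temporal coherence is preserved. The cosine-similarity boundary rule and the `boundary_thresh` parameter are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def interval_merge(frame_feats: np.ndarray,
                   boundary_thresh: float = 0.9) -> np.ndarray:
    """Interval-aware temporal merging sketch (illustrative assumption).

    frame_feats: (T, D) per-frame feature vectors.
    Consecutive frames whose cosine similarity stays above
    `boundary_thresh` are grouped into one temporal interval;
    each interval is merged into a single averaged token.
    """
    normed = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    sims = (normed[:-1] * normed[1:]).sum(axis=1)  # adjacent-frame cosine sim

    merged, start = [], 0
    for t, s in enumerate(sims, start=1):
        if s < boundary_thresh:  # similarity drop -> new interval boundary
            merged.append(frame_feats[start:t].mean(axis=0))
            start = t
    merged.append(frame_feats[start:].mean(axis=0))  # close the final interval
    return np.stack(merged)
```

Merging within intervals rather than across them keeps one token per coherent temporal span, so a scene change still contributes a distinct token instead of being averaged away.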