Principles of Visual Tokens for Efficient Video Understanding

📅 2024-11-20
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost and severe token redundancy of Transformers in video understanding, this work systematically uncovers intrinsic patterns of visual tokens and establishes five universal principles, most notably that token importance follows a Pareto distribution. Guided by these principles, the authors propose LITE, a lightweight video model with a plug-and-play, retraining-free token selection mechanism that combines Pareto-driven importance modeling with efficient architectural optimization. On Kinetics-400 and Something-Something-V2, LITE surpasses state-of-the-art methods at significantly lower GFLOPs. Crucially, its token selection strategy generalizes zero-shot across datasets and tasks without fine-tuning, demonstrating strong transferability and introducing a scalable paradigm for efficient video understanding.

📝 Abstract
Video understanding has made huge strides in recent years, relying largely on the power of transformers. As this architecture is notoriously expensive and video data is highly redundant, research into improving efficiency has become particularly relevant. Some creative solutions include token selection and merging. While most methods succeed in reducing the cost of the model and maintaining accuracy, an interesting pattern arises: most methods do not outperform the baseline of randomly discarding tokens. In this paper we take a closer look at this phenomenon and observe 5 principles of the nature of visual tokens. For example, we observe that the value of tokens follows a clear Pareto distribution where most tokens have remarkably low value, and just a few carry most of the perceptual information. We build on these and further insights to propose a lightweight video model, LITE, that can select a small number of tokens effectively, outperforming state-of-the-art and existing baselines across datasets (Kinetics-400 and Something-Something-V2) in the challenging trade-off of computation (GFLOPs) vs. accuracy. Experiments also show that LITE generalizes across datasets and even other tasks without the need for retraining.
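The abstract's central observation, that token importance is Pareto-distributed so a small top-scoring subset carries most of the perceptual information, can be illustrated with a minimal top-k selection sketch. This is not the paper's actual LITE mechanism; the scoring function and keep ratio here are illustrative assumptions.

```python
import numpy as np

def select_tokens(tokens, scores, keep_ratio=0.1):
    """Keep only the highest-scoring fraction of visual tokens.

    tokens: (N, D) array of token embeddings
    scores: (N,) importance score per token (higher = more informative)
    keep_ratio: fraction of tokens to retain
    """
    k = max(1, int(len(scores) * keep_ratio))
    keep_idx = np.argsort(scores)[-k:]  # indices of the top-k tokens
    return tokens[keep_idx], np.sort(keep_idx)

# Toy example: 100 tokens whose importance follows a heavy-tailed
# (Pareto-like) distribution, so a few tokens dominate the total mass.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(100, 16))
scores = rng.pareto(a=2.0, size=100)
kept, idx = select_tokens(tokens, scores, keep_ratio=0.1)
```

Under such a heavy-tailed distribution, discarding 90% of tokens removes far less than 90% of the total importance mass, which is the intuition behind aggressive token selection.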
Problem

Research questions and friction points this paper is trying to address.

Improving efficiency in video understanding models
Addressing redundancy in video data processing
Optimizing token selection for better performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a Pareto distribution to model token importance for selection
LITE, a lightweight model for efficient video processing
Outperforms the state of the art on the computation-vs-accuracy trade-off
Xinyue Hao
University of Edinburgh
Gen Li
University of Edinburgh
Shreyank N. Gowda
University of Nottingham
Robert B. Fisher
University of Edinburgh
Jonathan Huang
Scaled Foundations
Anurag Arnab
Google DeepMind
Computer Vision · Machine Learning · Deep Learning
Laura Sevilla-Lara
Reader at University of Edinburgh
Computer Vision