Video, How Do Your Tokens Merge?

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video transformers incur substantial computational overhead because attention scales with the spatio-temporal size of the input, and existing token-compression methods are designed primarily for image Vision Transformers (ViTs), with little systematic evaluation on video. This work presents a comprehensive study of training-free token merging for video transformers: merging is plug-and-play for any vision transformer, requires no retraining, and propagates information that token dropping would discard. Evaluated across four video transformers on three video-understanding benchmarks spanning coarse- and fine-grained action recognition, token merging yields roughly a 2.5× speedup while maintaining accuracy (an average top-1 drop of 0.55% on ViViT). The implementation is publicly available.

📝 Abstract
Video transformer models require huge amounts of compute resources due to the spatio-temporal scaling of the input. Tackling this, recent methods have proposed to drop or merge tokens for image models, whether randomly or via learned methods. Merging tokens has many benefits: it can be plugged into any vision transformer, does not require model re-training, and it propagates information that would otherwise be dropped through the model. Before now, video token merging has not been evaluated on temporally complex datasets for video understanding. In this work, we explore training-free token merging for video to provide comprehensive experiments and find best practices across four video transformers on three datasets that exhibit coarse and fine-grained action recognition. Our results showcase the benefits of video token merging with a speedup of around $2.5\times$ while maintaining accuracy (avg. $-0.55\%$ for ViViT). Code available at https://github.com/sjpollard/video-how-do-your-tokens-merge.
Problem

Research questions and friction points this paper is trying to address.

Reducing compute resources in video transformers
Evaluating token merging on complex video datasets
Maintaining accuracy while speeding up video processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free token merging for video
Around 2.5× speedup with minimal accuracy loss (avg. −0.55% top-1 for ViViT)
Compatible with various video transformers
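As background for the merging mechanism the paper evaluates, a common training-free approach is bipartite soft matching in the style of ToMe (Bolya et al.): tokens are split into two alternating sets, each token in one set is matched to its most similar counterpart in the other, and the `r` highest-scoring pairs are averaged. The sketch below is an illustrative NumPy implementation under that assumption; the function name, shapes, and averaging rule are hypothetical and not taken from the paper's code.

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Illustrative ToMe-style bipartite soft matching (not the paper's code).

    tokens: (N, D) array of token embeddings; r: number of pairs to merge.
    Returns an array with up to r fewer tokens.
    """
    # Split tokens into two alternating sets A and B.
    a, b = tokens[::2], tokens[1::2]

    # Cosine similarity between every token in A and every token in B.
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    scores = a_n @ b_n.T  # shape (|A|, |B|)

    # For each A-token, its best match in B and the match quality.
    best_b = scores.argmax(axis=1)
    best_score = scores.max(axis=1)

    # Merge only the r highest-scoring pairs; keep the remaining A-tokens.
    order = np.argsort(-best_score)
    merge_idx, keep_idx = order[:r], order[r:]
    merged = (a[merge_idx] + b[best_b[merge_idx]]) / 2.0

    # B-tokens consumed by a merge are dropped; the rest are kept.
    b_mask = np.ones(len(b), dtype=bool)
    b_mask[best_b[merge_idx]] = False
    return np.concatenate([a[keep_idx], merged, b[b_mask]], axis=0)
```

Because merging averages the paired embeddings rather than discarding one, information from both tokens continues to flow through later layers, which is why the abstract highlights merging over dropping.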