🤖 AI Summary
To address the high computational cost of frame-level feature extraction in video understanding tasks that demand high temporal resolution, this paper proposes ResidualViT, an efficient framework for temporally dense video encoding. Its key contributions are: (1) a Vision Transformer (ViT) architecture with learnable residual connections that enables plug-and-play reuse of pre-trained weights; (2) a lightweight token reduction module that explicitly models inter-frame temporal redundancy and discards redundant tokens; and (3) a lightweight knowledge distillation strategy based on feature reconstruction that closely approximates the features of the original model. Extensive experiments across four video understanding tasks and five benchmark datasets show that ResidualViT reduces computational cost by up to 60% and accelerates inference by up to 2.5×, while incurring less than a 1.2% drop in accuracy, significantly outperforming existing efficient video encoding approaches.
📝 Abstract
Several video understanding tasks, such as natural language temporal video grounding, temporal activity localization, and audio description generation, require "temporally dense" reasoning over frames sampled at high temporal resolution. However, computing frame-level features for these tasks is computationally expensive given the temporal resolution requirements. In this paper, we make three contributions to reduce the cost of computing features for temporally dense tasks. First, we introduce a vision transformer (ViT) architecture, dubbed ResidualViT, that leverages the large temporal redundancy in videos to efficiently compute temporally dense frame-level features. Our architecture incorporates (i) learnable residual connections that ensure temporal consistency across consecutive frames and (ii) a token reduction module that enhances processing speed by selectively discarding temporally redundant information while reusing weights of a pretrained foundation model. Second, we propose a lightweight distillation strategy to approximate the frame-level features of the original foundation model. Finally, we evaluate our approach across four tasks and five datasets, in both zero-shot and fully supervised settings, demonstrating significant reductions in computational cost (up to 60%) and improvements in inference speed (up to 2.5x faster), all while closely approximating the accuracy of the original foundation model.
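To make the two architectural ideas concrete, here is a minimal NumPy sketch of how one might combine them. This is an illustrative assumption, not the paper's actual implementation: `reduce_tokens` drops patch tokens that changed little since the previous frame (exploiting temporal redundancy), and `residual_frame_feature` blends the previous frame's feature with the encoding of the reduced token set via a gate `alpha`, which in the paper's setting would be a learnable residual connection; the toy `encoder` stands in for a pretrained ViT.

```python
import numpy as np

def reduce_tokens(curr_tokens, prev_tokens, keep_ratio=0.4):
    """Keep only the patch tokens that changed most vs. the previous frame.

    curr_tokens, prev_tokens: (num_tokens, dim) arrays of patch embeddings.
    Hypothetical scoring rule: L2 distance between corresponding tokens.
    """
    diff = np.linalg.norm(curr_tokens - prev_tokens, axis=-1)
    k = max(1, int(keep_ratio * len(curr_tokens)))
    keep_idx = np.sort(np.argsort(-diff)[:k])  # most-changed tokens, in order
    return curr_tokens[keep_idx]

def residual_frame_feature(prev_feature, reduced_tokens, encoder, alpha=0.7):
    """Blend the previous frame's feature with the cheap encoding of the
    reduced token set. `alpha` plays the role of a learnable residual gate
    (fixed here for illustration)."""
    return alpha * prev_feature + (1.0 - alpha) * encoder(reduced_tokens)

# Toy usage: a stand-in "encoder" that mean-pools tokens into one feature.
rng = np.random.default_rng(0)
prev_tokens = rng.normal(size=(16, 8))
curr_tokens = prev_tokens + 0.01 * rng.normal(size=(16, 8))  # mostly redundant frame
encoder = lambda tokens: tokens.mean(axis=0)

prev_feature = encoder(prev_tokens)
kept = reduce_tokens(curr_tokens, prev_tokens, keep_ratio=0.4)
feature = residual_frame_feature(prev_feature, kept, encoder)
```

Because consecutive frames are highly redundant, only a small fraction of tokens needs to pass through the expensive encoder per frame, which is the intuition behind the reported cost reduction; the distillation step (not sketched) would train the gate and reduction module so that `feature` matches the full model's output.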