Déjà Vu: Efficient Video-Language Query Engine with Learning-based Inter-Frame Computation Reuse

📅 2025-06-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video-Language Models (VideoLMs) suffer from high computational overhead and poor scalability in large-scale video systems because they encode each frame independently with a ViT. This work introduces ReuseViT, a vision encoder that learns inter-frame computation reuse: it detects redundant visual computation across consecutive frames and skips it while preserving accuracy. Its core contributions are (1) a trainable inter-frame reuse strategy that adaptively skips redundant computation across frames, and (2) memory-compute joint compaction techniques that convert the resulting FLOP savings into actual GPU speedups. Evaluated on three representative VideoLM tasks, the system accelerates embedding generation by up to 2.64x within a 2% error bound, substantially improving the practicality and scalability of large-scale video understanding without compromising model fidelity.
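ReuseViT's reuse decisions are learned inside the model, but the underlying idea of inter-frame computation reuse can be illustrated with a simple hand-written gate: compare each patch token with its counterpart in the previous frame and recompute only the patches that changed. This is a conceptual sketch, not the paper's method; `encode_token` and the threshold `tau` are illustrative stand-ins, and the cosine-similarity gate substitutes for ReuseViT's learned reuse module.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_token(tokens):
    # Stand-in for an expensive ViT block applied to patch tokens.
    return np.tanh(tokens)

def encode_frame_with_reuse(tokens, prev_tokens, prev_out, tau=0.95):
    """Recompute only the patch tokens that changed vs. the previous frame.

    tokens, prev_tokens: (N, D) patch embeddings of the current/previous frame.
    prev_out: (N, D) cached block outputs from the previous frame.
    """
    # Per-patch cosine similarity between the two frames.
    num = np.sum(tokens * prev_tokens, axis=1)
    den = np.linalg.norm(tokens, axis=1) * np.linalg.norm(prev_tokens, axis=1) + 1e-8
    reuse = num / den > tau                      # "static" patches: reuse the cache
    out = prev_out.copy()
    out[~reuse] = encode_token(tokens[~reuse])   # compute only the changed patches
    return out, reuse

# Two frames that differ in only 3 of 16 patches.
frame0 = rng.normal(size=(16, 8))
frame1 = frame0.copy()
frame1[:3] += rng.normal(size=(3, 8))

out0 = encode_token(frame0)                      # first frame: full compute
out1, reuse = encode_frame_with_reuse(frame1, frame0, out0)
print(f"reused {int(reuse.sum())}/16 patches")
```

The unchanged patches hit the cache exactly, so the expensive block runs only on the few patches that actually moved; a learned gate, as in the paper, can additionally trade accuracy against reuse rate per task.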

📝 Abstract
Recently, Video-Language Models (VideoLMs) have demonstrated remarkable capabilities, offering significant potential for flexible and powerful video query systems. These models typically rely on Vision Transformers (ViTs), which process video frames individually to extract visual embeddings. However, generating embeddings for large-scale videos requires ViT inference across numerous frames, posing a major hurdle to real-world deployment and necessitating solutions for integration into scalable video data management systems. This paper introduces Déjà Vu, a video-language query engine that accelerates ViT-based VideoLMs by reusing computations across consecutive frames. At its core is ReuseViT, a modified ViT model specifically designed for VideoLM tasks, which learns to detect inter-frame reuse opportunities, striking an effective balance between accuracy and reuse. Although ReuseViT significantly reduces computation, these savings do not directly translate into performance gains on GPUs. To overcome this, Déjà Vu integrates memory-compute joint compaction techniques that convert the FLOP savings into tangible performance gains. Evaluations on three VideoLM tasks show that Déjà Vu accelerates embedding generation by up to 2.64x within a 2% error bound, dramatically enhancing the practicality of VideoLMs for large-scale video analytics.
Problem

Research questions and friction points this paper is trying to address.

Accelerating ViT-based VideoLMs for scalable video analytics
Reducing computation by reusing inter-frame computations efficiently
Enhancing GPU performance via memory-compute joint compaction techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReuseViT enables inter-frame computation reuse
Memory-compute joint compaction boosts GPU performance
Achieves 2.64x speedup within 2% error bound
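On the compaction point above: skipping tokens in place leaves a sparse, masked workload that GPUs execute poorly. A common way to turn that sparsity into real speedups, and an assumption here about how such compaction typically works rather than the paper's exact mechanism, is a gather-compute-scatter pattern: pack the tokens that must be recomputed into a contiguous buffer, run one small dense matmul, and scatter the results back beside the cached outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 1024, 64
tokens = rng.normal(size=(N, D)).astype(np.float32)
W = rng.normal(size=(D, D)).astype(np.float32) / np.sqrt(D)

cached_out = tokens @ W                    # outputs cached from the previous frame
changed = rng.random(N) < 0.25             # suppose ~25% of tokens need recompute
new_tokens = tokens.copy()
new_tokens[changed] += 0.1 * rng.normal(size=(int(changed.sum()), D))

# Compaction: gather changed rows into one contiguous (~N/4, D) buffer so the
# hardware sees a small dense matmul instead of a sparse masked computation.
idx = np.flatnonzero(changed)
packed_out = new_tokens[idx] @ W           # dense compute on the packed buffer
out = cached_out.copy()
out[idx] = packed_out                      # scatter results back

reference = new_tokens @ W                 # full recompute, for comparison
assert np.allclose(out, reference, atol=1e-4)
print(f"recomputed {idx.size}/{N} tokens")
```

The matmul now scales with the number of changed tokens rather than the frame size, which is how FLOP savings become wall-clock savings on dense-math hardware.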