🤖 AI Summary
To address the token explosion problem and the trade-off between streaming capability and long-range reasoning in vision-language models processing long videos, this paper proposes a disentangled spatiotemporal compression architecture. It decomposes video features into spatial and temporal dimensions and compresses each independently via learnable modules, followed by end-to-end training of a fixed-length projector. This work introduces the first fixed-length representation paradigm based on spatiotemporal disentanglement, overcoming the fidelity-efficiency trade-off inherent in conventional pooling methods. Evaluated on multiple video-language understanding benchmarks, our approach matches or surpasses state-of-the-art pooling techniques while achieving substantial efficiency gains: 2.1× higher throughput and 58% reduced GPU memory consumption. The method establishes a scalable, efficient paradigm for streaming long-video understanding.
📝 Abstract
Recent advances in vision-language models (VLMs) have shown great promise in connecting images and text, but extending these models to long videos remains challenging due to the rapid growth in token counts. Models that compress videos by local aggregation in time or space have become popular for handling long-form inputs; however, these pooling-based projectors sacrifice the benefits of fixed-length representations that are crucial for streaming and efficient video understanding. We introduce $ exttt{Espresso}$, a new architecture that separately compresses spatial and temporal features into fixed-length sequences. $ exttt{Espresso}$ enables efficient video encoding while maintaining strong long-form reasoning capabilities. Experiments show that fixed-length compression combined with segment-wise processing offers a scalable and competitive alternative to pooling-based approaches. Our results demonstrate that fixed-length projectors, when properly designed and trained, remain a viable foundation for video-language modeling.