🤖 AI Summary
This work addresses two challenges in video moment retrieval: memory bottlenecks from dense frame processing and the loss of critical information under sparse sampling. To overcome these limitations, the authors propose SMORE, a framework that integrates query-guided semantic encoding with an adaptive frame compression mechanism. Leveraging a multimodal large language model, SMORE applies query-aware importance modulation to preserve salient high-resolution video content within a constrained memory budget. By moving beyond conventional sparse sampling strategies, the method achieves state-of-the-art performance on three major benchmarks: QVHighlights, Charades-STA, and ActivityNet-Captions.
📝 Abstract
Recent advances in Multimodal Large Language Models (MLLMs) have improved image recognition and reasoning, but video-related tasks remain challenging due to the memory constraints of dense frame processing. Existing Video Moment Retrieval (VMR) methods rely on sparse frame sampling, risking information loss, especially in lengthy videos. We propose SMORE (See MORE, store less), a framework that improves memory efficiency while maintaining high information resolution. SMORE (1) uses query-guided captions to encode semantics aligned with user intent, (2) applies query-aware importance modulation to highlight relevant segments, and (3) adaptively compresses frames to preserve key content while reducing redundancy. This enables efficient video understanding without exceeding memory budgets. Experiments show that SMORE achieves state-of-the-art performance on the QVHighlights, Charades-STA, and ActivityNet-Captions benchmarks.
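The core idea of query-aware importance modulation with adaptive compression can be pictured with a minimal sketch. This is not the authors' implementation: the function name, the cosine-similarity scoring, and the mean-pooling of low-importance frames are illustrative assumptions, standing in for whatever scoring and compression SMORE actually uses.

```python
import numpy as np

def adaptive_compress(frame_feats: np.ndarray, query_feat: np.ndarray, budget: int) -> np.ndarray:
    """Sketch of query-aware frame compression (illustrative, not SMORE's actual method).

    Scores each frame by its similarity to the query, keeps the `budget`
    most relevant frames at full resolution, and pools the remaining
    frames into one coarse context vector so they still contribute
    without consuming the memory budget.
    """
    # Query-aware importance: cosine similarity between each frame and the query.
    sims = frame_feats @ query_feat
    sims = sims / (np.linalg.norm(frame_feats, axis=1) * np.linalg.norm(query_feat) + 1e-8)

    # Keep the `budget` highest-scoring frames, preserving temporal order.
    keep = np.sort(np.argsort(sims)[-budget:])
    kept = frame_feats[keep]

    # Compress the remaining frames into a single pooled vector (redundancy reduction).
    rest = np.ones(len(frame_feats), dtype=bool)
    rest[keep] = False
    if rest.any():
        pooled = frame_feats[rest].mean(axis=0, keepdims=True)
        return np.concatenate([kept, pooled], axis=0)
    return kept
```

Under this sketch, a 100-frame video with a budget of 8 is reduced to 9 tokens: 8 query-relevant frames plus one pooled summary of the rest, which is how a fixed memory budget can coexist with coverage of the full video.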