🤖 AI Summary
Existing keyframe sampling methods struggle to efficiently capture query-relevant evidential cues in long-form video understanding due to limitations in context length and computational cost. This work proposes an evidence-driven keyframe sampling framework grounded in information bottleneck theory, formulating keyframe selection as the maximization of conditional mutual information between selected frames and the query. Through structural decomposition, this objective is transformed into a frame-wise independent scoring problem. To the best of our knowledge, this is the first approach to integrate the information bottleneck principle into query-guided video sampling. By combining a query-conditioned evidence scoring network with a contrastive learning objective, the method substantially improves both training and inference efficiency. Under strict token budgets, it achieves significant performance gains over existing sampling strategies across multiple long-video question-answering benchmarks.
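One plausible formalization of the objective described above (the notation here is assumed for illustration, not taken from the paper): given video frames $V = \{v_1, \dots, v_T\}$, a query $q$, and a frame budget $k$, keyframe selection maximizes the mutual information between the selected subset and the query,

$$
S^{*} \;=\; \underset{S \subseteq V,\; |S| \le k}{\arg\max}\; I(S;\, q),
$$

and the structural decomposition replaces this combinatorial subset search with a sum of independent per-frame evidence scores,

$$
I(S;\, q) \;\approx\; \sum_{v_i \in S} s_{\theta}(v_i, q),
$$

so that the optimal subset is simply the top-$k$ frames under the learned scoring function $s_{\theta}$.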
📝 Abstract
Multimodal Large Language Models (MLLMs) have shown strong performance on video question answering, but their application to long-form videos is constrained by limited context length and computational cost, making keyframe sampling essential. Existing approaches typically rely on semantic relevance or reinforcement learning, which either fail to capture evidential cues or suffer from inefficient combinatorial optimization. In this work, we propose an evidence-driven keyframe sampling framework grounded in information bottleneck theory. We formulate keyframe selection as maximizing the conditional mutual information between selected frames and the query, providing a principled objective that reflects each frame's contribution to answering the question. To make this objective tractable, we exploit its structure to derive a decomposed optimization that reduces subset selection to independent frame-level scoring. We further introduce a query-conditioned evidence scoring network trained with a contrastive objective to estimate evidential importance efficiently. Experiments on long-form video understanding benchmarks show that our method consistently outperforms prior sampling strategies under strict token budgets, while significantly improving training efficiency.
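To make the pipeline concrete, here is a minimal sketch of the two components the abstract describes: frame-wise independent scoring followed by budgeted top-k selection, and an InfoNCE-style contrastive loss that pushes evidential frames above distractors for the same query. Everything here is an assumption for illustration; in particular, cosine similarity stands in for the paper's learned query-conditioned scoring network, and the function names (`score_frames`, `select_keyframes`, `info_nce_loss`) are hypothetical.

```python
import numpy as np

def score_frames(frame_embs, query_emb):
    # Query-conditioned evidence score. Cosine similarity is an
    # illustrative stand-in for the learned scoring network s_theta(v, q).
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    return f @ q

def select_keyframes(frame_embs, query_emb, budget_k):
    # Because scoring is frame-wise independent, subset selection
    # reduces to keeping the top-k scoring frames within the budget.
    scores = score_frames(frame_embs, query_emb)
    topk = np.argsort(-scores)[:budget_k]
    return np.sort(topk)  # restore temporal order for the MLLM

def info_nce_loss(pos_scores, neg_scores, tau=0.1):
    # Contrastive (InfoNCE-style) objective: evidential frames should
    # receive higher scores than distractor frames under the same query.
    logits = np.concatenate([pos_scores, neg_scores]) / tau
    logits = logits - logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[: len(pos_scores)].sum())
```

Top-k selection under a fixed frame budget is what keeps the token count bounded at inference time, and the independence of the scores is what removes the combinatorial search that reinforcement-learning-based samplers must perform.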