🤖 AI Summary
This work addresses key challenges in long-form video understanding—namely, the high memory overhead and limited context length of multimodal large language models (MLLMs), as well as difficulties in modeling long-range inter-segment dependencies and enabling early termination once sufficient evidence is gathered. To overcome these limitations, the authors propose AdaptToken, a training-free adaptive inference framework that leverages MLLM response entropy as a global control signal to dynamically allocate token budgets. It uses cross-modal attention to rank tokens within video segments and supports early stopping via an efficient variant, AdaptToken-Lite. Evaluated on four long-video benchmarks, AdaptToken consistently improves accuracy (e.g., +6.7 points on average with Qwen2.5-VL 7B) while continuing to benefit from inputs of up to 10K frames; AdaptToken-Lite reduces inference time by approximately 50% with negligible performance degradation.
📝 Abstract
Long video understanding remains challenging for Multi-modal Large Language Models (MLLMs) due to high memory costs and context-length limits. Prior approaches mitigate this by scoring and selecting frames/tokens within short clips, but they lack a principled mechanism to (i) compare relevance across distant video clips and (ii) stop processing once sufficient evidence has been gathered. We propose AdaptToken, a training-free framework that turns an MLLM's self-uncertainty into a global control signal for long-video token selection. AdaptToken splits a video into groups, extracts cross-modal attention to rank tokens within each group, and uses the model's response entropy to estimate each group's prompt relevance. This entropy signal enables a global token budget allocation across groups and further supports early stopping (AdaptToken-Lite), skipping the remaining groups when the model becomes sufficiently certain. Across four long-video benchmarks (VideoMME, LongVideoBench, LVBench, and MLVU) and multiple base MLLMs (7B-72B), AdaptToken consistently improves accuracy (e.g., +6.7 on average over Qwen2.5-VL 7B) and continues to benefit from extremely long inputs (up to 10K frames), while AdaptToken-Lite reduces inference time by about half with comparable performance. Project page: https://haozheqi.github.io/adapt-token
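The core control loop in the abstract—score each group by response entropy, allocate more of the token budget to low-entropy (high-relevance) groups, and stop early once the model is certain—can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exponential weighting `exp(-H)` and the entropy threshold are assumptions for concreteness, and the paper's actual relevance-to-budget mapping may differ.

```python
import math

def response_entropy(probs):
    """Shannon entropy of the model's answer distribution over a group.
    Lower entropy means the model is more certain given that group's tokens."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def allocate_budget(group_probs, total_budget):
    """Global token budget allocation across video groups.
    Assumed weighting: weight_g ∝ exp(-H_g), so lower-entropy (more
    prompt-relevant) groups receive proportionally more tokens."""
    weights = [math.exp(-response_entropy(p)) for p in group_probs]
    z = sum(weights)
    return [round(total_budget * w / z) for w in weights]

def early_stop_index(group_probs, threshold=0.5):
    """AdaptToken-Lite-style early stopping (hypothetical threshold):
    process groups in order and stop at the first group where the model's
    response entropy falls below the threshold, skipping the rest."""
    for i, p in enumerate(group_probs):
        if response_entropy(p) < threshold:
            return i
    return len(group_probs) - 1
```

For example, a group yielding a peaked answer distribution like `[0.9, 0.1]` has lower entropy than a uniform `[0.5, 0.5]` one, so it would receive a larger share of the token budget and could trigger early termination.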