AI Summary
To address the prohibitive computational cost of processing long videos with multimodal large language models (MLLMs) and the loss of critical visual information incurred by existing compression methods (e.g., average pooling), this paper proposes an instruction-guided hybrid-level video token compression framework. Our method introduces a local-global dual-level compression architecture: the local level preserves fine-grained details via grouped visual token attention, while the global level captures high-level semantic structure through learnable, instruction-conditioned token modeling. We further design a novel instruction-conditioned pretraining paradigm and introduce the HICom-248K dataset to support training. Experiments demonstrate consistent improvements across three video multiple-choice QA benchmarks, achieving an average accuracy gain of +2.43% while reducing video tokens by 78.8%. To our knowledge, this is the first work to simultaneously achieve both a high compression ratio and superior performance, significantly outperforming state-of-the-art approaches.
Abstract
Recent Multi-modal Large Language Models (MLLMs) have been challenged by the computational overhead resulting from massive video frames, often alleviated through compression strategies. However, visual content does not contribute equally to user instructions, so existing strategies (e.g., average pooling) inevitably lose potentially useful information. To tackle this, we propose the Hybrid-level Instruction Injection Strategy for Conditional Token Compression in MLLMs (HICom), which uses the instruction as a condition to guide compression at both local and global levels. This encourages the compression to retain the maximum amount of user-focused information while reducing visual tokens to minimize the computational burden. Specifically, the instruction condition is injected into the grouped visual tokens at the local level and into the learnable tokens at the global level, and an attention mechanism completes the conditional compression. Through this hybrid-level compression, the instruction-relevant visual parts are highlighted while the temporal-spatial structure is preserved, making the result easier for LLMs to understand. To further unleash the potential of HICom, we introduce a new conditional pre-training stage with our proposed dataset, HICom-248K. Experiments show that HICom achieves strong video understanding with fewer tokens, improving performance by 2.43% on average across three multiple-choice QA benchmarks while saving 78.8% of tokens compared with the SOTA method. The code is available at https://github.com/lntzm/HICom.
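The two-level mechanism described above can be illustrated with a minimal sketch. This is not the authors' implementation: the grouping scheme, the additive instruction injection, and all dimensions here are illustrative assumptions; it only shows the general shape of instruction-conditioned attention compression at a local (grouped) and global (learnable-token) level.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def local_compress(visual, instr, group_size):
    """Local level (sketch): split visual tokens into groups; an
    instruction-conditioned query attends within each group,
    compressing it to a single token (fine-grained detail kept)."""
    T, d = visual.shape
    groups = visual.reshape(T // group_size, group_size, d)
    query = instr.mean(axis=0)                  # pooled instruction condition (assumed)
    scores = groups @ query / np.sqrt(d)        # (G, group_size) attention logits
    w = softmax(scores, axis=-1)
    return (w[..., None] * groups).sum(axis=1)  # (G, d) one token per group

def global_compress(visual, instr, learnable_q):
    """Global level (sketch): learnable tokens, conditioned on the
    instruction, cross-attend over ALL visual tokens for high-level
    semantics. Additive injection is an assumption for illustration."""
    d = visual.shape[1]
    q = learnable_q + instr.mean(axis=0)        # instruction injection (assumed additive)
    scores = q @ visual.T / np.sqrt(d)          # (K, T) attention logits
    w = softmax(scores, axis=-1)
    return w @ visual                           # (K, d) compressed global tokens

rng = np.random.default_rng(0)
visual = rng.standard_normal((16, 8))  # 16 visual tokens, dim 8 (toy sizes)
instr = rng.standard_normal((4, 8))    # 4 instruction tokens
lq = rng.standard_normal((2, 8))       # 2 learnable global tokens
local = local_compress(visual, instr, group_size=4)                 # (4, 8)
out = np.concatenate([local, global_compress(visual, instr, lq)])   # 16 -> 6 tokens
```

Note how the local branch preserves the temporal-spatial ordering of groups (one output token per group), while the global branch lets a few learnable tokens summarize the whole sequence, which matches the paper's stated goal of highlighting instruction-relevant content while keeping structure.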