AI Summary
To address the challenges of limited context windows and excessive computational overhead when multimodal large language models (MLLMs) process high-resolution images and videos, this paper proposes the first LLM-native visual compression paradigm. Unlike prior approaches that rely on external modules, the method leverages the intrinsic understanding mechanisms of LLMs by introducing learnable Vision Compression (VoCo) tokens and integrating attention distillation for autonomous, efficient visual token compression. The framework encompasses vision-instruction fine-tuning, token sparsification, and temporally continuous training, enabling long-range video modeling. Experiments demonstrate a 576× reduction in visual tokens, a 94.8% decrease in inference FLOPs, and a 69.6% increase in throughput, while maintaining near-full-resolution performance on video question answering benchmarks with negligible accuracy degradation.
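As a rough, purely illustrative token-budget calculation (not taken from the paper beyond the stated 576× ratio), assume a LLaVA-style encoder that emits 576 vision tokens per image, so compressing each frame to a single VoCo token yields the 576× reduction. The sketch below shows how that changes the number of video frames that fit into a fixed context window; the constants `CONTEXT_WINDOW` and `TEXT_BUDGET` are hypothetical.

```python
# Back-of-the-envelope token budget. The 576 tokens/frame figure assumes a
# LLaVA-style vision encoder (24 x 24 patches); the context and text budgets
# below are illustrative assumptions, not values from the paper.
CONTEXT_WINDOW = 4096          # tokens available to the LLM (illustrative)
TOKENS_PER_FRAME_RAW = 576     # uncompressed vision tokens per frame
TOKENS_PER_FRAME_VOCO = 1      # one VoCo token per frame after 576x compression
TEXT_BUDGET = 512              # tokens reserved for the prompt and answer

def max_frames(tokens_per_frame: int) -> int:
    """How many video frames fit in the remaining context window."""
    return (CONTEXT_WINDOW - TEXT_BUDGET) // tokens_per_frame

print(max_frames(TOKENS_PER_FRAME_RAW))   # 6 frames without compression
print(max_frames(TOKENS_PER_FRAME_VOCO))  # 3584 frames with VoCo tokens
```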
Abstract
Vision-Language Models (VLMs) have achieved remarkable success in various multi-modal tasks, but they are often bottlenecked by the limited context window and the high computational cost of processing high-resolution image inputs and videos. Vision compression can alleviate this problem by reducing the vision token count. Previous approaches compress vision tokens with external modules and force LLMs to understand the compressed ones, leading to visual information loss. However, the LLMs' own paradigm for understanding vision tokens is not fully utilized in the compression learning process. We propose VoCo-LLaMA, the first approach to compress vision tokens using LLMs. By introducing Vision Compression (VoCo) tokens during the vision instruction tuning phase and leveraging attention distillation, our method distills the way LLMs comprehend vision tokens into their processing of VoCo tokens. VoCo-LLaMA facilitates effective vision compression and improves computational efficiency during the inference stage. Specifically, our method achieves minimal performance loss with a compression ratio of 576×, resulting in up to 94.8% fewer FLOPs and a 69.6% acceleration in inference time. Furthermore, through continuous training on time-series sequences of compressed video-frame tokens, VoCo-LLaMA demonstrates the ability to understand temporal correlations, outperforming previous methods on popular video question-answering benchmarks. Our approach presents a promising way to unlock the full potential of VLMs' contextual window, enabling more scalable multi-modal applications. The project page, along with the associated code, can be accessed via https://yxxxb.github.io/VoCo-LLaMA-page/.
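For intuition, the sketch below shows one way a causal attention mask could enforce such compression: the sequence is laid out as [vision tokens | VoCo tokens | text tokens], and text positions are blocked from attending to the raw vision tokens, so visual information can only reach the answer through the VoCo tokens. The function name, layout, and mask construction are illustrative assumptions, not the released VoCo-LLaMA implementation.

```python
import torch

def voco_attention_mask(n_vision: int, n_voco: int, n_text: int) -> torch.Tensor:
    """Causal attention mask for a sequence laid out as
    [vision tokens | VoCo tokens | text tokens].

    Text tokens are blocked from attending to the raw vision tokens, so the
    only path from visual content to the text runs through the VoCo tokens.
    Returns a boolean matrix that is True where attention is allowed.
    """
    n = n_vision + n_voco + n_text
    mask = torch.tril(torch.ones(n, n, dtype=torch.bool))  # standard causal mask
    text_start = n_vision + n_voco
    mask[text_start:, :n_vision] = False  # text may not see vision tokens directly
    return mask

# Example: 576 vision tokens compressed into 1 VoCo token, followed by 32 text tokens.
mask = voco_attention_mask(n_vision=576, n_voco=1, n_text=32)
print(mask.shape)             # torch.Size([609, 609])
print(mask[600, :576].any())  # tensor(False): text cannot attend to vision tokens
```

During inference, only the cached VoCo-token states would need to be kept per image or frame under this kind of masking, which is where the token and FLOPs savings come from.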