🤖 AI Summary
Aggressive visual token compression in multimodal large language models (MLLMs) discards the fine-grained spatial information that OCR, chart, and table understanding depend on. To address this, the paper proposes Vision Remember (VR), a lightweight mechanism inserted between LLM decoder layers that lets vision tokens re-memorize essential visual context through text-guided resampling of retained multi-level vision features with saliency-enhancing local attention. This cross-layer re-memory design moves beyond the one-shot, unidirectional compression of conventional projectors, jointly improving computational efficiency and spatial modeling fidelity. Across multiple benchmarks, the resulting LLaVA-VR (2B) outperforms TokenPacker-HD-7B and DeepSeek-VL-7B on OCR, chart, and table understanding tasks while maintaining high inference efficiency.
📝 Abstract
In this work, we study efficient Multimodal Large Language Models (MLLMs). Redundant vision tokens consume significant computational memory and resources, so many previous works compress them in the Vision Projector to reduce the number of vision tokens. However, simply compressing in the Vision Projector can lose visual information, especially for tasks that rely on fine-grained spatial relationships, such as OCR and Chart & Table Understanding. To address this problem, we propose Vision Remember, which is inserted between the LLM decoder layers to allow vision tokens to re-memorize vision features. Specifically, we retain multi-level vision features and resample them with the vision tokens that have already interacted with the text tokens. During resampling, each vision token attends only to a local region of the vision features, which we refer to as saliency-enhancing local attention. This not only improves computational efficiency but also captures more fine-grained contextual information and spatial relationships within the region. Comprehensive experiments on multiple visual understanding benchmarks validate the effectiveness of our method when combined with various Efficient Vision Projectors, showing performance gains without sacrificing efficiency. Built on Vision Remember, LLaVA-VR with only 2B parameters also surpasses previous representative MLLMs such as TokenPacker-HD-7B and DeepSeek-VL-7B.
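The local attention described in the abstract can be sketched in a few lines: each compressed vision token attends only to a small spatial window of the retained feature map, instead of all positions. The sketch below is a minimal NumPy illustration of that idea under simplifying assumptions (queries laid out on a square grid, a fixed square window, plain scaled dot-product attention); the function name and grid-alignment scheme are illustrative, not the paper's actual implementation.

```python
import numpy as np

def local_window_attention(queries, feat_map, window=3):
    """Illustrative sketch: each compressed vision token (query) attends
    only to a small spatial window of a retained feature map.
    queries: (n, D) with n a perfect square; feat_map: (H, W, D)."""
    H, W, D = feat_map.shape
    n = queries.shape[0]
    side = int(np.sqrt(n))        # assume queries form a side x side grid
    r = window // 2
    out = np.empty_like(queries)
    for i in range(n):
        # Map the i-th query to the center of its cell in the feature map.
        cy = (i // side) * H // side + H // (2 * side)
        cx = (i % side) * W // side + W // (2 * side)
        y0, y1 = max(0, cy - r), min(H, cy + r + 1)
        x0, x1 = max(0, cx - r), min(W, cx + r + 1)
        keys = feat_map[y0:y1, x0:x1].reshape(-1, D)   # local region only
        scores = keys @ queries[i] / np.sqrt(D)        # scaled dot-product
        w = np.exp(scores - scores.max())
        w /= w.sum()                                    # softmax weights
        out[i] = w @ keys                               # re-memorized token
    return out
```

Because each query sees only `window * window` positions rather than `H * W`, the cost of resampling stays linear in the number of vision tokens, which is why the abstract can claim gains without sacrificing efficiency.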