🤖 AI Summary
Vision-language models (VLMs) struggle with complex reasoning tasks that require multimodal or multilingual real-world knowledge, and conventional external-memory approaches, which concatenate long sequences of image and text tokens, can drastically inflate context length and even degrade performance. To address this, the authors propose CoMEM, a continuous memory system for VLMs that compresses arbitrary multimodal and multilingual knowledge into just 8 continuous embeddings. Their key insight is that a VLM can serve as its own memory encoder: a data- and parameter-efficient fine-tuning recipe turns the VLM into such an encoder while updating only 1.2% of its parameters on 15.6K self-synthesized samples. Because the inference-time VLM remains entirely frozen, the memory module is plug-and-play and can be attached as needed. Across eight challenging multimodal reasoning benchmarks, the approach delivers significant performance gains while remaining both data- and parameter-efficient.
📝 Abstract
Language models (LMs) and their extension, vision-language models (VLMs), have achieved remarkable performance across various tasks. However, they still struggle with complex reasoning tasks that require multimodal or multilingual real-world knowledge. To support such capabilities, an external memory system that can efficiently provide relevant multimodal information is essential. Existing approaches generally concatenate image and text tokens into a long sequence as memory, which, however, may drastically increase context length and even degrade performance. In contrast, we propose using continuous memory, a compact set of dense embeddings, to represent multimodal and multilingual knowledge more effectively and efficiently. Our key insight is that a VLM can serve as its own continuous memory encoder. We empirically show that this design improves performance on complex multimodal reasoning tasks. Building on this, we introduce a data-efficient and parameter-efficient method to fine-tune the VLM into a memory encoder, requiring only 1.2% of the model's parameters and a small corpus of 15.6K self-synthesized samples. Our approach, CoMEM, utilizes the VLM's original capabilities to encode arbitrary multimodal and multilingual knowledge into just 8 continuous embeddings. Since the inference-time VLM remains frozen, our memory module is plug-and-play and can be flexibly integrated as needed. Extensive experiments across eight multimodal reasoning benchmarks demonstrate the effectiveness of our approach.
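The core interface the abstract describes is: an arbitrary-length multimodal context goes in, a fixed set of 8 continuous embeddings comes out, and those 8 embeddings are what the frozen VLM consumes. The sketch below illustrates only that interface; the chunked mean-pooling "encoder" is a hypothetical stand-in for the paper's actual fine-tuned VLM encoder, and all names here (`encode_memory`, `num_slots`) are illustrative, not from the paper.

```python
# Illustrative sketch of the continuous-memory interface described above.
# NOTE: in CoMEM the compression is learned by a parameter-efficiently
# fine-tuned VLM; the chunked mean pooling here is just a placeholder
# encoder to show the shape of the computation.

from typing import List

Vector = List[float]


def mean_pool(vectors: List[Vector]) -> Vector:
    """Element-wise mean of a non-empty list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]


def encode_memory(token_embeddings: List[Vector], num_slots: int = 8) -> List[Vector]:
    """Compress an arbitrary-length sequence of token embeddings into a
    fixed number of continuous memory embeddings (placeholder: split the
    sequence into num_slots chunks and mean-pool each chunk)."""
    n = len(token_embeddings)
    slots = []
    for s in range(num_slots):
        start = s * n // num_slots
        end = (s + 1) * n // num_slots
        chunk = token_embeddings[start:end] or [token_embeddings[-1]]
        slots.append(mean_pool(chunk))
    return slots


# A 500-token multimodal context collapses to 8 memory embeddings, so the
# frozen VLM's input grows by only 8 positions instead of 500.
context = [[float(t), float(t) * 0.5] for t in range(500)]
memory = encode_memory(context)
```

The point of the fixed slot count is that memory cost no longer scales with the length of the retrieved knowledge, which is what distinguishes continuous memory from token-concatenation memory.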