🤖 AI Summary
To address two critical limitations in Large Vision-Language Models (LVLMs)—progressive forgetting of visual information as the context expands and distortion of 2D spatial structure—this paper proposes a dual-path visual processing architecture: a Context path for real-time modeling of the current image and a Memory path for persistent storage of salient visual memories. We introduce RoPE-DHR, a novel positional encoding scheme that preserves 2D spatial awareness for high-resolution images via thumbnail-based aggregation while mitigating long-range attention decay. Additionally, we propose dynamic high-resolution feature alignment and multi-scale visual memory mechanisms. Evaluated across seven benchmarks spanning long-context understanding, multi-image reasoning, and visual question answering, our method consistently outperforms state-of-the-art LVLMs, achieving significant gains in mid-sequence visual content retention (+23.6%) and spatial relation modeling accuracy (+18.4%).
📝 Abstract
Recent advancements in Large Vision-Language Models built upon Large Language Models have established aligning visual features with LLM representations as the dominant paradigm. However, inherited LLM architectural designs introduce suboptimal characteristics for multimodal processing. First, LVLMs exhibit a bimodal distribution in attention allocation, leading to progressive neglect of middle visual content as the context expands. Second, conventional positional encoding schemes fail to preserve vital 2D structural relationships when processing dynamic high-resolution images. To address these limitations, we propose CoMemo, a dual-path architecture that combines a Context image path with an image Memory path for visual processing, effectively alleviating visual information neglect. Additionally, we introduce RoPE-DHR, a novel positional encoding mechanism that employs thumbnail-based positional aggregation to maintain 2D spatial awareness while mitigating remote decay in extended sequences. Evaluations across seven benchmarks, including long-context comprehension, multi-image reasoning, and visual question answering, demonstrate CoMemo's superior performance compared to conventional LVLM architectures. The project page is available at https://lalbj.github.io/projects/CoMemo/.
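To make "thumbnail-based positional aggregation" concrete, here is a minimal sketch of one plausible reading of the idea: thumbnail tokens receive ordinary sequential position ids, while each high-resolution tile token reuses the id of the thumbnail token covering the same spatial region, so the positional span seen by RoPE does not grow with the number of tiles. The function name, grid layout, and mapping rule below are illustrative assumptions, not the paper's exact formulation.

```python
def rope_dhr_positions(thumb_h, thumb_w, full_h, full_w, base=0):
    """Illustrative position-id assignment (hypothetical sketch).

    Thumbnail tokens on a thumb_h x thumb_w grid get sequential ids
    starting at `base`. Each token of the full-resolution grid
    (full_h x full_w) inherits the id of the thumbnail token that
    covers its spatial region, so high-resolution tokens add no new
    positions and long-range RoPE decay is not amplified by tiling.
    """
    # Sequential ids for the thumbnail grid, row-major.
    thumb_ids = [[base + r * thumb_w + c for c in range(thumb_w)]
                 for r in range(thumb_h)]
    # Downsampling factors mapping full-resolution cells to thumbnail cells.
    sy, sx = full_h // thumb_h, full_w // thumb_w
    # Each full-resolution token shares its covering thumbnail token's id,
    # preserving the 2D correspondence between scales.
    full_ids = [[thumb_ids[r // sy][c // sx] for c in range(full_w)]
                for r in range(full_h)]
    return thumb_ids, full_ids


# Example: a 2x2 thumbnail and a 4x4 high-resolution grid.
thumb_ids, full_ids = rope_dhr_positions(2, 2, 4, 4)
print(thumb_ids)  # [[0, 1], [2, 3]]
print(full_ids)   # each 2x2 block of the full grid shares one thumbnail id
```

Under this sketch, the maximum position id stays bounded by the thumbnail size regardless of how many tiles the dynamic high-resolution scheme produces, which is the mechanism the abstract credits for mitigating remote decay.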