Stateful Cross-layer Vision Modulation

📅 2026-02-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing visual feature fusion methods typically perform static aggregation after encoding, which precludes intervention in the representation formation process and often results in the loss of fine-grained details and a mismatch between shallow visual features and the language model's semantic distribution. To address this, the work proposes a cross-layer memory modulation framework that introduces a recurrently updated memory state within the visual encoder to model inter-layer dependencies. It further designs a layer-wise feedback modulation mechanism that dynamically refreshes token representations at each layer, guiding their semantic evolution. The approach is the first to incorporate explicit control over representational evolution into multimodal fusion, enabling efficient cross-layer information integration and semantic alignment without fine-tuning the language model. Experiments demonstrate significant gains on multiple visual question answering and hallucination evaluation benchmarks while preserving the original visual token count, encoder architecture, and language model parameters.

📝 Abstract
Recent multimodal large language models (MLLMs) widely adopt multi-layer visual feature fusion to enhance visual representation. However, existing approaches typically perform static concatenation or weighted aggregation after visual encoding, without intervening in the representation formation process itself. As a result, fine-grained details from early layers may be progressively suppressed during hierarchical abstraction. Moreover, directly introducing shallow-layer features into the language model often leads to a semantic distribution mismatch with the visual feature space that the LLM's cross-attention layers were pretrained on, which typically requires additional adaptation or fine-tuning of the LLM. To address these limitations, we revisit visual representation learning from the perspective of representation evolution control and propose a cross-layer memory-modulated vision framework (SCVM). Specifically, we introduce a recurrently updated cross-layer memory state inside the vision encoder to model long-range inter-layer dependencies. We further design a layer-wise feedback modulation mechanism that refreshes token representations at each layer based on the accumulated memory, thereby structurally regulating the representation evolution trajectory. In addition, we incorporate an auxiliary semantic alignment objective that explicitly supervises the final memory state, encouraging progressive compression and reinforcement of task-relevant information. Experimental results on multiple visual question answering and hallucination evaluation benchmarks demonstrate that SCVM achieves consistent performance improvements without expanding visual tokens, introducing additional vision encoders, or modifying or fine-tuning the language model.
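The abstract's core loop (a memory state updated layer by layer, which in turn gates each layer's token representations) can be sketched as below. This is a minimal NumPy illustration, not the paper's method: the GRU-like update rule, the gating form, the parameter names (`W_update`, `W_gate`), and the identity stand-in for the encoder layer are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8          # feature dimension
n_tokens = 4   # number of visual tokens
n_layers = 3   # encoder layers

# Hypothetical parameters, randomly initialized for the sketch
W_update = rng.standard_normal((d, d)) * 0.1   # memory-update projection
W_gate = rng.standard_normal((d, d)) * 0.1     # feedback-gate projection

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scvm_pass(tokens):
    """One forward pass: a recurrently updated memory state modulates
    token representations at every layer (identity encoder stand-in)."""
    memory = np.zeros(d)
    for _ in range(n_layers):
        # pooled summary of the current layer's tokens
        pooled = tokens.mean(axis=0)
        # recurrent memory update: gated convex combination (GRU-like)
        z = sigmoid(W_update @ pooled)
        memory = z * memory + (1.0 - z) * pooled
        # layer-wise feedback modulation: gate each token by the memory
        gate = sigmoid(tokens @ W_gate @ memory)[:, None]
        tokens = tokens * (1.0 + gate)
    return tokens, memory

tokens, memory = scvm_pass(rng.standard_normal((n_tokens, d)))
```

The final `memory` here is the state the abstract's auxiliary alignment objective would supervise; the token count and shapes are unchanged throughout, consistent with the claim that no visual tokens are added.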
Problem

Research questions and friction points this paper is trying to address.

multimodal large language models
visual representation learning
cross-layer feature fusion
semantic distribution mismatch
hierarchical abstraction
Innovation

Methods, ideas, or system contributions that make the work stand out.

cross-layer memory
representation evolution control
feedback modulation
semantic alignment
vision-language modeling
Ying Liu
Beijing Institute of Technology
Yudong Han
Beijing Institute of Technology
Kean Shi
Peking University
Liyuan Pan
Beijing Institute of Technology
Computer vision