🤖 AI Summary
Existing visual feature fusion methods typically perform static aggregation after encoding, which precludes intervention in the representation formation process, often resulting in the loss of fine-grained details and a mismatch between shallow visual features and the semantic distribution of the language model. To address this, this work proposes a cross-layer memory modulation framework that introduces a recurrently updated memory state within the visual encoder to model inter-layer dependencies. It further designs a layer-wise feedback modulation mechanism that dynamically refreshes token representations at each layer, guiding their semantic evolution. This approach is the first to incorporate explicit control over representational evolution into multimodal fusion, enabling efficient cross-layer information integration and semantic alignment without fine-tuning the language model. Experiments demonstrate significant performance gains on multiple visual question answering and hallucination evaluation benchmarks, all while preserving the original visual token count, encoder architecture, and language model parameters.
📝 Abstract
Recent multimodal large language models (MLLMs) widely adopt multi-layer visual feature fusion to enhance visual representation. However, existing approaches typically perform static concatenation or weighted aggregation after visual encoding, without intervening in the representation formation process itself. As a result, fine-grained details from early layers may be progressively suppressed during hierarchical abstraction. Moreover, directly injecting shallow-layer features into the language model often causes a semantic distribution mismatch with the visual feature space on which the LLM's cross-attention layers were pretrained, which typically requires additional adaptation or fine-tuning of the LLM. To address these limitations, we revisit visual representation learning from the perspective of representation evolution control and propose a cross-layer memory-modulated vision framework (SCVM). Specifically, we introduce a recursively updated cross-layer memory state inside the vision encoder to model long-range inter-layer dependencies. We further design a layer-wise feedback modulation mechanism that refreshes token representations at each layer based on the accumulated memory, thereby structurally regulating the representation evolution trajectory. In addition, we incorporate an auxiliary semantic alignment objective that explicitly supervises the final memory state, encouraging progressive compression and reinforcement of task-relevant information. Experimental results on multiple visual question answering and hallucination evaluation benchmarks demonstrate that SCVM achieves consistent performance improvements without expanding visual tokens, introducing additional vision encoders, or modifying or fine-tuning the language model.
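The two core mechanisms described above (a recursively updated cross-layer memory state, and layer-wise feedback modulation of token representations) can be sketched in a few lines. The sketch below is illustrative only: the paper does not specify its update equations, so the gated memory update, mean-pooled layer summary, and additive feedback projection are all assumptions, and the random linear maps stand in for real encoder layers.

```python
# Illustrative sketch of cross-layer memory modulation (NOT the paper's
# actual implementation). The gated update, mean-pooled summary, and
# additive feedback form are assumed for exposition.
import numpy as np

rng = np.random.default_rng(0)
d = 16          # token/feature dimension (assumed)
n_tokens = 8    # number of visual tokens (assumed)
n_layers = 4    # encoder depth (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in encoder layers: small random linear maps used residually.
layers = [rng.standard_normal((d, d)) * 0.05 for _ in range(n_layers)]
W_z = rng.standard_normal((2 * d, d)) * 0.05   # gate for the memory update
W_m = rng.standard_normal((d, d)) * 0.05       # feedback projection

tokens = rng.standard_normal((n_tokens, d))    # visual token representations
memory = np.zeros(d)                           # cross-layer memory state

for W in layers:
    tokens = tokens + tokens @ W               # encoder layer (residual)
    pooled = tokens.mean(axis=0)               # summarize the current layer
    z = sigmoid(np.concatenate([memory, pooled]) @ W_z)
    memory = (1 - z) * memory + z * pooled     # recursive cross-layer memory update
    tokens = tokens + (memory @ W_m)           # layer-wise feedback modulation

print(tokens.shape, memory.shape)              # (8, 16) (16,)
```

Note that the token count and encoder depth are unchanged by the loop, consistent with the paper's claim of preserving the visual token count and encoder architecture; only the per-layer representations are refreshed via the accumulated memory.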