🤖 AI Summary
Large Vision-Language Models (LVLMs) achieve accurate localization of salient objects in images but suffer from rapid attention decay across layers, hindering their ability to reason about object relationships and fine-grained attributes.
Method: We propose Cross-Layer Vision Smoothing (CLVS), which introduces an updateable vision memory that sustains focused attention on key objects across Transformer layers. CLVS initializes the memory with position-unbiased visual attention in the first layer; in subsequent layers it jointly considers the memory and the layer's visual attention while updating the memory iteratively, and it adaptively terminates smoothing via uncertainty estimation once visual understanding is complete.
Contribution/Results: CLVS consistently improves relation and fine-grained attribute understanding across three mainstream LVLMs and four benchmarks, achieving state-of-the-art performance. Extensive experiments demonstrate its effectiveness, robustness, and model-agnostic generalizability.
📝 Abstract
Large Vision-Language Models (LVLMs) can accurately locate key objects in images, yet their attention to these objects tends to be very brief. Motivated by the hypothesis that sustained focus on key objects can improve LVLMs' visual capabilities, we propose Cross-Layer Vision Smoothing (CLVS). The core idea of CLVS is to incorporate a vision memory that smooths the attention distribution across layers. Specifically, we initialize this vision memory with position-unbiased visual attention in the first layer. In subsequent layers, the model's visual attention jointly considers the vision memory from previous layers, while the memory is updated iteratively, thereby maintaining smooth attention on key objects. Given that visual understanding primarily occurs in the early and middle layers of the model, we use uncertainty as an indicator of completed visual understanding and terminate the smoothing process accordingly. Experiments on four benchmarks across three LVLMs confirm the effectiveness and generalizability of our method. CLVS achieves state-of-the-art performance on a variety of visual understanding tasks, with particularly significant improvements in relation and attribute understanding.
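The smoothing loop described in the abstract might be sketched as follows. This is an illustrative reconstruction, not the paper's actual formulation: the blending weight `alpha`, the use of Shannon entropy as the uncertainty signal, and the per-layer attention inputs are all assumptions, and the position-unbiased initialization is simplified to plain normalization.

```python
import numpy as np

def entropy(p):
    # Shannon entropy (natural log) of a discrete distribution,
    # used here as a stand-in uncertainty measure.
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def cross_layer_vision_smoothing(attn_per_layer, alpha=0.5, uncertainty_threshold=1.0):
    """Illustrative sketch of cross-layer smoothing of visual attention.

    attn_per_layer: list of 1-D arrays, each a layer's attention weights over
    image tokens. The vision memory is initialized from the first layer and
    blended into each later layer's attention; the memory is then updated to
    the blended result. Smoothing stops early once the entropy of the smoothed
    attention drops below `uncertainty_threshold`, i.e. visual understanding
    is treated as complete. Returns the list of smoothed distributions.
    """
    memory = np.asarray(attn_per_layer[0], dtype=float)
    memory = memory / memory.sum()          # simplified stand-in for position-unbiased init
    smoothed = [memory.copy()]
    for attn in attn_per_layer[1:]:
        a = np.asarray(attn, dtype=float)
        a = a / a.sum()
        blended = alpha * memory + (1 - alpha) * a   # jointly consider memory and this layer
        blended = blended / blended.sum()
        memory = blended                              # iterative memory update
        smoothed.append(blended)
        if entropy(blended) < uncertainty_threshold:  # uncertainty low: terminate smoothing
            break
    return smoothed
```

For example, if the first layer attends sharply to one image token and later layers drift toward uniform attention, the memory keeps the key token's weight elevated in the smoothed distributions of those later layers.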