🤖 AI Summary
This work addresses the limitations of current large vision-language models, which, constrained by fixed visual-token budgets and static cropping strategies, often lose fine-grained details and hallucinate in complex reasoning tasks. The authors propose LASER, a training-free adaptive inference framework that dynamically reallocates attention based on task demands. By introducing the query-driven Visual Activation by Query (VAQ) metric and performing a layer-wise sensitivity analysis, LASER reveals that different tasks depend on different network depths, thereby challenging the prevailing "magic layer" assumption. Experiments demonstrate that this approach significantly improves performance across multiple visual question answering benchmarks, with particularly notable gains in complex visual reasoning scenarios.
📝 Abstract
Large Vision-Language Models (LVLMs) have advanced rapidly by aligning visual patches with the text embedding space, but a fixed visual-token budget forces images to be resized to a uniform pretraining resolution, often erasing fine-grained details and causing hallucinations via over-reliance on language priors. Recent attention-guided enhancement (e.g., cropping or region-focused attention allocation) alleviates this, yet it commonly hinges on a static "magic layer" empirically chosen on simple recognition benchmarks and thus may not transfer to complex reasoning tasks. In contrast to this static assumption, we propose a dynamic perspective on visual grounding. Through a layer-wise sensitivity analysis, we demonstrate that visual grounding is a dynamic process: while simple object recognition tasks rely on middle layers, complex visual search and reasoning tasks require visual information to be reactivated at deeper layers. Based on this observation, we introduce Visual Activation by Query (VAQ), a metric that identifies the layer whose attention map is most relevant to query-specific visual grounding by measuring attention sensitivity to the input query. Building on VAQ, we further propose LASER (Layer-adaptive Attention-guided Selective visual and decoding Enhancement for Reasoning), a training-free inference procedure that adaptively selects task-appropriate layers for visual localization and question answering. Experiments across diverse VQA benchmarks show that LASER significantly improves VQA accuracy across tasks with varying levels of complexity.
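The core idea behind VAQ-style layer selection can be sketched in a few lines. The abstract does not give the exact formula, so the snippet below is a minimal illustration under stated assumptions: for each transformer layer we compare the attention mass placed on visual tokens with the query present versus absent, and pick the layer whose attention map changes the most. The function name `vaq_layer_selection` and the mean-absolute-difference sensitivity score are hypothetical stand-ins, not the paper's actual metric.

```python
import numpy as np

def vaq_layer_selection(attn_with_query, attn_without_query):
    """Hypothetical VAQ-style criterion (assumption, not the paper's exact
    metric): score each layer by how much its attention over visual tokens
    shifts when the query is included, and return the most sensitive layer.

    Both inputs have shape (num_layers, num_visual_tokens): per-layer
    attention mass on each visual token.
    """
    scores = np.abs(attn_with_query - attn_without_query).mean(axis=1)
    return int(np.argmax(scores))

# Toy example: 4 layers, 6 visual tokens.
rng = np.random.default_rng(0)
base = rng.random((4, 6))          # attention with a null/empty query
with_q = base.copy()
with_q[2] += 0.5                   # layer 2 reacts most strongly to the query
print(vaq_layer_selection(with_q, base))  # -> 2
```

A real implementation would extract these attention maps from the LVLM's self-attention weights during a forward pass; the selected layer would then guide where LASER applies its visual enhancement.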