BlindSight: Harnessing Sparsity for Efficient VLMs

📅 2025-07-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large vision-language models (VLMs) suffer from prohibitively high prefill latency due to quadratically scaling attention computation over extended visual token sequences. This work proposes a training-free, plug-and-play inference optimization method. First, it systematically identifies and categorizes intrinsic sparse attention patterns inherent in VLMs. Then, it designs a prompt-agnostic, head-level sparsification strategy that integrates attention sinks with cross-image attention analysis to construct input-template-aware dynamic sparse attention masks. The approach requires no fine-tuning and generalizes across diverse VLM architectures. Evaluated on multi-image understanding benchmarks, it reduces computational FLOPs by 32%–41% while maintaining accuracy within ±2% of the dense baseline—demonstrating substantial gains in inference efficiency and practical deployability.

📝 Abstract
Large vision-language models (VLMs) enable the joint processing of text and images. However, the inclusion of vision data significantly expands the prompt length. Along with the quadratic complexity of the attention computation, this results in a longer prefill duration. An approach to mitigate this bottleneck is to leverage the inherent sparsity in the attention computation. In our analysis of attention patterns in VLMs, we observe that a substantial portion of layers exhibit minimal cross-image attention, except through attention-sink tokens per image. These sparse attention patterns fall into distinct categories: sink-only, document-mask, and hybrid document-sink masks. Based on this, we propose BlindSight: a training-free approach to optimize VLM inference using an input-template-aware attention sparsity mask. We utilize samples from a dataset to derive a prompt-agnostic sparsity categorization for every attention head. We evaluate the proposed technique using VLMs such as Qwen2-VL, Qwen2.5-VL and Gemma-3. BlindSight results in a 32%–41% reduction in FLOPs on average, with accuracy within ±2% of the original model on most evaluated multi-image understanding benchmarks.
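The three mask categories described in the abstract can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal NumPy mock-up assuming that image token spans are known from the input template and that the first token of each image span acts as its attention sink, with text tokens kept dense.

```python
import numpy as np

def sparse_vlm_mask(seq_len, image_spans, mode="hybrid"):
    """Boolean causal attention mask (True = attend), BlindSight-style sketch.

    image_spans: (start, end) token ranges per image (end exclusive); the first
                 token of each span is treated as that image's attention sink.
    mode: 'document' keeps only within-image blocks (no cross-image attention),
          'sink'     keeps only the per-image sink columns,
          'hybrid'   keeps both (the document-sink mask).
    """
    keep = np.zeros((seq_len, seq_len), dtype=bool)
    in_image = np.zeros(seq_len, dtype=bool)
    for start, end in image_spans:
        in_image[start:end] = True
        if mode in ("document", "hybrid"):
            keep[start:end, start:end] = True  # within-image (document) block
        if mode in ("sink", "hybrid"):
            keep[:, start] = True              # every query sees the image sink
    # Non-image (text) tokens retain dense attention as both queries and keys.
    keep[~in_image, :] = True
    keep[:, ~in_image] = True
    # Intersect with a causal lower-triangular mask for autoregressive prefill.
    return keep & np.tril(np.ones((seq_len, seq_len), dtype=bool))
```

For a 10-token prompt with images at spans (1, 4) and (5, 8), the hybrid mask keeps within-image and text attention while routing cross-image attention only through the per-image sink tokens, which is the sparsity pattern the FLOP savings come from.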
Problem

Research questions and friction points this paper is trying to address.

Reducing prefill duration in VLMs due to long vision prompts
Leveraging sparsity in attention computation to optimize VLM inference
Maintaining accuracy while reducing FLOPs in multi-image understanding tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages inherent sparsity in the attention computation, training-free
Applies an input-template-aware, head-level attention sparsity mask
Reduces FLOPs by 32%–41% while keeping accuracy within ±2% of the dense baseline
Tharun Adithya Srikrishnan
Advanced Micro Devices, Inc. (AMD)
Deval Shah
Advanced Micro Devices, Inc. (AMD)
Steven K. Reinhardt
AMD