🤖 AI Summary
This work addresses the high latency and memory overhead of vision-language models (VLMs) caused by processing a large number of visual tokens. Existing training-free token pruning methods struggle to preserve both local evidence and global context under aggressive compression. To overcome this limitation, we propose the Focus-Scan-Refine (FSR) framework, which, for the first time, incorporates principles from human visual perception into token pruning. FSR first focuses on visually salient and task-relevant regions, then scans for diverse contextual tokens, and finally refines the representation through similarity-weighted fusion. Without requiring any retraining, FSR consistently outperforms state-of-the-art pruning methods across multiple VLM backbones and vision-language benchmarks, achieving substantial token reduction while maintaining or even improving accuracy.
📝 Abstract
Vision-language models (VLMs) often produce a massive number of visual tokens that greatly increase inference latency and memory footprint; while training-free token pruning offers a practical remedy, existing methods still struggle to balance local evidence and global context under aggressive compression. We propose Focus-Scan-Refine (FSR), a human-inspired, plug-and-play pruning framework that mimics how humans answer visual questions: focus on key evidence, then scan globally if needed, and refine the scanned context by aggregating relevant details. FSR first focuses on key evidence by combining visual importance with instruction relevance, avoiding the bias toward visually salient but query-irrelevant regions. It then scans for complementary context conditioned on the focused set, selecting tokens that are most different from the focused evidence. Finally, FSR refines the scanned context by aggregating nearby informative tokens into the scan anchors via similarity-based assignment and score-weighted merging, without increasing the token budget. Extensive experiments across multiple VLM backbones and vision-language benchmarks show that FSR consistently improves the accuracy-efficiency trade-off over existing state-of-the-art pruning methods. The source code is available at https://github.com/ILOT-code/FSR.
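The three stages described above can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: the score-fusion rule (elementwise product), the focus/scan budget split (`focus_ratio`), and the anchor merge rule (averaging the anchor with the score-weighted group mean) are all assumptions for illustration; the paper's exact formulas may differ.

```python
import numpy as np

def fsr_prune(tokens, vis_score, rel_score, budget, focus_ratio=0.5):
    """Illustrative sketch of Focus-Scan-Refine token pruning.

    tokens:    (N, D) visual token features
    vis_score: (N,) visual-importance score per token
    rel_score: (N,) instruction-relevance score per token
    budget:    total number of tokens to keep
    """
    N, D = tokens.shape
    # 1) Focus: combine visual importance with instruction relevance
    #    (product is an assumed fusion rule) and keep the top tokens.
    combined = vis_score * rel_score
    focus_k = int(budget * focus_ratio)
    focus_idx = np.argsort(-combined)[:focus_k]

    # 2) Scan: among the remaining tokens, pick those most dissimilar
    #    to the focused evidence (lowest max cosine similarity).
    feats = tokens / np.linalg.norm(tokens, axis=-1, keepdims=True)
    sim_to_focus = (feats @ feats[focus_idx].T).max(axis=-1)
    sim_to_focus[focus_idx] = np.inf          # exclude focused tokens
    scan_k = budget - focus_k
    scan_idx = np.argsort(sim_to_focus)[:scan_k]

    # 3) Refine: assign each pruned token to its nearest scan anchor and
    #    merge it in, weighted by its combined score, so the contextual
    #    detail is kept without increasing the token budget.
    kept = np.concatenate([focus_idx, scan_idx])
    pruned_idx = np.setdiff1d(np.arange(N), kept)
    anchors = tokens[scan_idx].copy()
    if pruned_idx.size > 0:
        assign = (feats[pruned_idx] @ feats[scan_idx].T).argmax(axis=-1)
        for a in range(scan_k):
            group = pruned_idx[assign == a]
            if group.size == 0:
                continue
            w = np.exp(combined[group])
            w = (w / w.sum())[:, None]        # softmax over group scores
            merged = (w * tokens[group]).sum(axis=0)
            anchors[a] = 0.5 * (anchors[a] + merged)  # assumed merge rule
    return np.concatenate([tokens[focus_idx], anchors], axis=0)
```

The output always has exactly `budget` rows: the focused tokens pass through unchanged, while the scan anchors absorb the pruned tokens assigned to them, which is why the method can compress aggressively without discarding global context outright.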