🤖 AI Summary
This work addresses an overlooked vulnerability of large vision-language models (LVLMs) under visual token compression: ignoring compression in attack design can mask a model's adversarial fragility and yield overly optimistic robustness evaluations. To remedy this, the authors propose the Compression-AliGnEd attack (CAGE), a framework that explicitly aligns perturbation optimization with the compressed inference pipeline, without requiring knowledge of the deployed compression mechanism or its token budget. CAGE combines expected feature disruption with rank distortion alignment, exposing for the first time the robustness-assessment bias induced by token compression. Extensive experiments across diverse compression methods and datasets demonstrate that CAGE substantially reduces robust accuracy, underscoring the need for compression-aware security evaluation of LVLMs.
📝 Abstract
Visual token compression is widely used to accelerate large vision-language models (LVLMs) by pruning or merging visual tokens, yet its adversarial robustness remains unexplored. We show that existing encoder-based attacks can substantially overestimate the robustness of compressed LVLMs due to an optimization-inference mismatch: perturbations are optimized on the full-token representation, while inference is performed through a token-compression bottleneck. To address this gap, we propose the Compression-AliGnEd attack (CAGE), which aligns perturbation optimization with compressed inference without assuming access to the deployed compression mechanism or its token budget. CAGE combines (i) expected feature disruption, which concentrates distortion on tokens likely to survive across plausible budgets, and (ii) rank distortion alignment, which actively aligns token distortions with rank scores to promote the retention of highly distorted evidence. Across diverse representative plug-and-play compression mechanisms and datasets, our results show that CAGE consistently achieves lower robust accuracy than baseline attacks. This work highlights that robustness assessments ignoring compression can be overly optimistic, calling for compression-aware security evaluation and defenses for efficient LVLMs.
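The two components of the attack objective can be made concrete with a minimal sketch. This is an illustrative reading of the abstract, not the paper's exact formulation: the function names (`survival_probs`, `cage_objective`), the choice of L2 per-token distortion, the uniform averaging over candidate budgets, and the cosine form of the rank-distortion alignment term are all assumptions introduced here for clarity.

```python
import numpy as np

def survival_probs(scores, budgets):
    """Probability that each token survives pruning, averaged over a
    set of plausible token budgets (since the deployed budget is unknown)."""
    order = np.argsort(-scores)            # descending importance
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))  # rank 0 = most important token
    keep = np.stack([(ranks < k).astype(float) for k in budgets])
    return keep.mean(axis=0)               # expectation over budgets

def cage_objective(clean_feats, adv_feats, scores, budgets, lam=0.5):
    """Sketch of a compression-aligned attack objective (to be maximized)."""
    # Per-token feature distortion (L2 over the feature dimension).
    d = np.linalg.norm(adv_feats - clean_feats, axis=1)
    # (i) Expected feature disruption: weight each token's distortion by
    #     its chance of surviving compression under any plausible budget.
    p = survival_probs(scores, budgets)
    efd = (p * d).sum()
    # (ii) Rank-distortion alignment: cosine similarity between the
    #      distortion profile and the rank scores, so that highly
    #      distorted tokens are also the ones likely to be retained.
    align = d @ scores / (np.linalg.norm(d) * np.linalg.norm(scores) + 1e-8)
    return efd + lam * align
```

Under this sketch, a perturbation that concentrates distortion on high-ranked tokens scores strictly higher than one that wastes its budget on tokens the compressor would drop, which is the mismatch the abstract attributes to compression-unaware attacks.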