Less Is More -- Until It Breaks: Security Pitfalls of Vision Token Compression in Large Vision-Language Models

📅 2026-01-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Although visual token compression enhances the inference efficiency of large vision-language models, it significantly undermines their robustness and introduces subtle security vulnerabilities. This work is the first to systematically show that the risk stems from instability in the token importance ranking used during compression. To exploit this vulnerability, the authors propose the Compression-Aware Attack (CAA) and a black-box transfer variant (Transfer CAA). Comprehensive evaluations across diverse models, datasets, and compression mechanisms demonstrate that compression consistently degrades robustness, and that existing defense strategies offer only limited protection. These findings highlight a critical trade-off between computational efficiency and robustness in compressed vision-language systems.
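
The fragility the summary describes is easy to see in isolation: hard top-k token selection is discontinuous in the importance scores, so an imperceptibly small score shift near the keep/drop boundary changes which tokens survive compression. A minimal toy sketch (hypothetical scores, not taken from the paper):

```python
import torch

# Top-k selection is discontinuous in the scores: a tiny perturbation
# near the keep/drop boundary swaps which tokens survive compression.
scores    = torch.tensor([0.90, 0.51, 0.50, 0.10])          # clean importance scores
perturbed = scores + torch.tensor([0.0, -0.02, 0.02, 0.0])  # imperceptible shift

k = 2  # keep the top-2 visual tokens
print(sorted(scores.topk(k).indices.tolist()))     # [0, 1] -> token 1 is kept
print(sorted(perturbed.topk(k).indices.tolist()))  # [0, 2] -> token 1 is silently dropped
```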

📝 Abstract
Visual token compression is widely adopted to improve the inference efficiency of Large Vision-Language Models (LVLMs), enabling their deployment in latency-sensitive and resource-constrained scenarios. However, existing work has focused mainly on efficiency and performance, while the security implications of visual token compression remain largely unexplored. In this work, we first reveal that visual token compression substantially degrades the robustness of LVLMs: models that are robust under uncompressed inference become highly vulnerable once compression is enabled. These vulnerabilities are state-specific; the failure modes emerge only in the compressed setting and disappear entirely when compression is disabled, making them particularly stealthy and difficult to diagnose. By analyzing the key stages of the compression process, we identify instability in token importance ranking as the primary cause of this robustness degradation: small, imperceptible perturbations can significantly alter token rankings, leading the compression mechanism to mistakenly discard task-critical information and ultimately causing model failure. Motivated by this observation, we propose the Compression-Aware Attack (CAA) to systematically study and exploit this vulnerability. CAA directly targets the token selection mechanism and induces failures exclusively under compressed inference. We extend this approach to more realistic black-box settings with Transfer CAA, where neither the target model nor the compression configuration is accessible. Evaluating potential defenses, we find that they provide only limited protection. Extensive experiments across models, datasets, and compression methods show that visual token compression significantly undermines robustness, revealing a previously overlooked efficiency-security trade-off.
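
The paper's attack details are not reproduced here, but the abstract's description ("directly targets the token selection mechanism") suggests an objective over the compressor's ranking. Below is a minimal PGD-style sketch of what such a compression-aware step could look like, assuming a hypothetical `score_fn` that exposes the per-token importance scores the compressor ranks on (real compressors typically derive these from attention signals); `caa_step` and its parameters are illustrative names, not the paper's implementation:

```python
import torch

def caa_step(image, delta, score_fn, clean_keep_idx, keep_ratio=0.25,
             alpha=1 / 255, eps=8 / 255):
    """One PGD-style step of a compression-aware attack (illustrative sketch).

    score_fn       : hypothetical callable mapping an image to per-token
                     importance scores (shape [num_tokens]); stands in for
                     whatever signal the compressor ranks tokens by.
    clean_keep_idx : indices of tokens kept on the clean image, i.e. the
                     task-critical tokens the attack tries to evict.
    """
    delta = delta.clone().requires_grad_(True)
    scores = score_fn(image + delta)
    k = max(1, int(scores.numel() * keep_ratio))
    cutoff = scores.topk(k).values.min()  # score of the weakest kept token
    # Margin loss: push the originally kept tokens below the keep cut-off,
    # so the compressor discards task-critical information.
    loss = (scores[clean_keep_idx] - cutoff).clamp(min=0).sum()
    loss.backward()
    with torch.no_grad():
        delta = (delta - alpha * delta.grad.sign()).clamp(-eps, eps)
    return delta.detach()
```

Because such an objective touches only the token ranking, a successful perturbation leaves uncompressed inference intact, consistent with the state-specific failures the abstract reports.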
Problem

Research questions and friction points this paper is trying to address.

visual token compression
robustness
security
large vision-language models
adversarial vulnerability
Innovation

Methods, ideas, or system contributions that make the work stand out.

visual token compression
robustness degradation
compression-aware attack
token importance ranking
large vision-language models