🤖 AI Summary
Existing vision-language model (VLM) resource-exhaustion attacks increase inference overhead but often produce semantically anomalous outputs, failing to balance effectiveness and stealth. This paper proposes Hidden Tail, a stealthy resource-exhaustion attack against VLMs. Its core innovation is generating prompt-agnostic adversarial images that induce models to produce excessively long outputs, up to the maximum token limit, by appending user-invisible special tokens at the output tail that suppress EOS token generation, while keeping the visible text semantically natural. The method employs a dynamically weighted composite loss function that jointly optimizes semantic fidelity, repeated generation of the special token, and EOS suppression. Experiments show that Hidden Tail increases output length by up to 19.2× over baseline methods, significantly outperforming prior attacks, and is the first to simultaneously achieve high-intensity computational resource exhaustion and strong output-level stealth.
📝 Abstract
Vision-Language Models (VLMs) are increasingly deployed in real-world applications, but their high inference cost makes them vulnerable to resource consumption attacks. Prior attacks attempt to extend VLM output sequences by optimizing adversarial images, thereby increasing inference costs. However, these extended outputs often introduce irrelevant abnormal content, compromising attack stealthiness. This trade-off between effectiveness and stealthiness poses a major limitation for existing attacks. To address this challenge, we propose *Hidden Tail*, a stealthy resource consumption attack that crafts prompt-agnostic adversarial images, inducing VLMs to generate maximum-length outputs by appending special tokens invisible to users. Our method employs a composite loss function that balances semantic preservation, repetitive special token induction, and suppression of the end-of-sequence (EOS) token, optimized via a dynamic weighting strategy. Extensive experiments show that *Hidden Tail* outperforms existing attacks, increasing output length by up to 19.2× and reaching the maximum token limit, while preserving attack stealthiness. These results highlight the urgent need to improve the robustness of VLMs against efficiency-oriented adversarial threats. Our code is available at https://github.com/zhangrui4041/Hidden_Tail.
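The composite loss described above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the weighting schedule (a linear ramp), the function and variable names, and the specific weight ranges are all assumptions chosen for clarity; the actual attack optimizes adversarial image pixels against a VLM's token logits.

```python
def composite_loss(sem_loss, special_loss, eos_loss, step, total_steps):
    """Illustrative dynamic weighting of the three Hidden Tail objectives.

    - sem_loss: keeps the visible portion of the output semantically natural
    - special_loss: encourages repeated generation of a user-invisible special token
    - eos_loss: penalizes probability mass on the EOS token (suppresses termination)

    Assumed schedule (not from the paper): early steps emphasize semantic
    fidelity; weight shifts linearly toward EOS suppression and
    special-token repetition as optimization progresses.
    """
    ramp = step / max(total_steps, 1)   # 0 -> 1 over the optimization run
    w_sem = 1.0 - 0.5 * ramp            # decays from 1.0 to 0.5
    w_special = 0.5 + 0.5 * ramp        # grows from 0.5 to 1.0
    w_eos = 0.5 + 0.5 * ramp            # grows from 0.5 to 1.0
    return w_sem * sem_loss + w_special * special_loss + w_eos * eos_loss
```

In a real attack loop, the three scalar losses would be computed from the model's output distribution at each decoding position, and the combined loss would be backpropagated to the adversarial image.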