VLMInferSlow: Evaluating the Efficiency Robustness of Large Vision-Language Models as a Service

📅 2025-06-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current large vision-language models (VLMs) deployed as ML-as-a-Service lack systematic evaluation of efficiency robustness under black-box API access—particularly regarding adversarial perturbations that inflate inference latency without compromising perceptual fidelity. Method: We propose the first efficiency-robustness benchmark tailored to real-world service environments, integrating fine-grained VLM inference efficiency modeling, zeroth-order optimization (ZOO)-driven black-box adversarial search, and API-level latency measurement with perturbation-constrained optimization—requiring no architectural knowledge or internal model access. Contribution/Results: Our method generates imperceptible adversarial images that increase computational inference overhead by up to 128.47% across mainstream VLMs, exposing critical efficiency vulnerabilities in production deployments. This work establishes the first benchmark and analytical framework for evaluating efficiency robustness of VLMs under black-box API conditions, advancing trustworthy AI service deployment.

Technology Category

Application Category

📝 Abstract
Vision-Language Models (VLMs) have demonstrated great potential in real-world applications. While existing research primarily focuses on improving their accuracy, the efficiency remains underexplored. Given the real-time demands of many applications and the high inference overhead of VLMs, efficiency robustness is a critical issue. However, previous studies evaluate efficiency robustness under unrealistic assumptions, requiring access to the model architecture and parameters -- an impractical scenario in ML-as-a-service settings, where VLMs are deployed via inference APIs. To address this gap, we propose VLMInferSlow, a novel approach for evaluating VLM efficiency robustness in a realistic black-box setting. VLMInferSlow incorporates fine-grained efficiency modeling tailored to VLM inference and leverages zero-order optimization to search for adversarial examples. Experimental results show that VLMInferSlow generates adversarial images with imperceptible perturbations, increasing the computational cost by up to 128.47%. We hope this research raises the community's awareness about the efficiency robustness of VLMs.
Problem

Research questions and friction points this paper is trying to address.

Evaluating efficiency robustness of Vision-Language Models (VLMs) in real-world applications
Addressing lack of black-box evaluation methods for VLM inference efficiency
Proposing adversarial example generation to test computational cost vulnerabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Black-box VLM efficiency robustness evaluation
Fine-grained efficiency modeling for VLMs
Zero-order optimization for adversarial examples
🔎 Similar Papers
No similar papers found.
X
Xiasi Wang
The Hong Kong University of Science and Technology
T
Tianliang Yao
Tongji University
Simin Chen
Simin Chen
Columbia University
Software EngineeringMachine Learning
Runqi Wang
Runqi Wang
Beijing Jiaotong University
Few-Shot LearningContinual LearningMuti-Modal
L
Lei Ye
Huawei
Kuofeng Gao
Kuofeng Gao
Tsinghua University
Large Language ModelTrustworthy AIBackdoor Learning
Y
Yi Huang
University of Chinese Academy of Sciences
Y
Yuan Yao
The Hong Kong University of Science and Technology