Randomized Smoothing Meets Vision-Language Models

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fundamental challenge that randomized smoothing (RS) struggles to adapt to generative models, particularly vision-language models (VLMs). We propose the first RS-based robustness certification framework for sequence outputs. The core method constructs an equivalence mapping from generated sequences to semantic clusters or classifiable actions, thereby reducing sequence-level robustness certification to standard classification robustness. Building on this, we derive a theoretical robust radius with explicit error bounds and prove, under mild assumptions, a high-order scaling law for sample complexity. The approach integrates randomized smoothing, an oracle classifier, and semantic clustering; to our knowledge, it is the first method to enable efficient, verifiable robustness certification for state-of-the-art VLMs. Empirically, it significantly enhances robustness against adversarial attacks such as jailbreaks while maintaining theoretical rigor and practical applicability.
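The reduction described above can be sketched in a few lines: perturb the input with Gaussian noise, generate a sequence for each noisy sample, and let an oracle map each sequence to a discrete class (e.g., harmful vs. harmless), so that standard majority-vote smoothing applies. The `generate` and `oracle_label` functions below are hypothetical stubs standing in for a real VLM and oracle classifier; this is an illustrative sketch, not the paper's implementation.

```python
import random
from collections import Counter

def generate(x):
    """Stub VLM: returns a 'sequence' depending on the (noisy) input.
    A hypothetical stand-in for a real vision-language model."""
    return "refuse" if sum(x) < 1.0 else "comply"

def oracle_label(seq):
    """Stub oracle classifier: map a generated sequence to a discrete class
    (here 0 = harmless/refusal, 1 = harmful/compliance)."""
    return 0 if seq == "refuse" else 1

def smoothed_predict(x, sigma=0.5, n=1000, seed=0):
    """Majority vote over Gaussian perturbations of the input: the step
    that reduces sequence-level certification to classification."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes[oracle_label(generate(noisy))] += 1
    return votes

votes = smoothed_predict([0.0, 0.0])
top_class, _ = votes.most_common(1)[0]
```

Once the votes are in hand, the standard RS machinery (confidence bounds on the top-class probability, then a certified radius) applies unchanged, which is the point of the equivalence mapping.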

📝 Abstract
Randomized smoothing (RS) is one of the prominent techniques for ensuring the correctness of machine learning models, as it yields point-wise robustness certificates that can be derived analytically. While RS is well understood for classification, its application to generative models is unclear, since their outputs are sequences rather than labels. We resolve this by connecting generative outputs to an oracle classification task and showing that RS can still be enabled: the final response can be classified as a discrete action (e.g., service-robot commands in VLAs), as harmful vs. harmless (content moderation or toxicity detection in VLMs), or by applying oracles to cluster answers into semantically equivalent groups. Provided that the error rate of the oracle classifier's comparison is bounded, we develop a theory that relates the number of samples to the corresponding robustness radius. We further derive improved scaling laws analytically relating the certified radius and accuracy to the number of samples, showing that the earlier finding that 2 to 3 orders of magnitude fewer samples suffice with minimal loss remains valid even under weaker assumptions. Together, these advances make robustness certification both well-defined and computationally feasible for state-of-the-art VLMs, as validated against recent jailbreak-style adversarial attacks.
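The link between sample count and robustness radius that the abstract describes can be illustrated with the standard binary-case certificate R = σ · Φ⁻¹(p_lower), where p_lower is a lower confidence bound on the top-class vote probability. The sketch below uses a simple Hoeffding bound for p_lower; the paper's own bounds and scaling laws may differ, so treat this purely as an illustrative stand-in.

```python
import math
from statistics import NormalDist

def certified_radius(n_votes_top, n_total, sigma, alpha=0.001):
    """Binary-case smoothing certificate: R = sigma * Phi^{-1}(p_lower).
    p_lower is a Hoeffding lower confidence bound (level 1 - alpha) on the
    top-class probability; returns None (abstain) if certification fails.
    Illustrative only -- not the paper's actual bound."""
    p_hat = n_votes_top / n_total
    p_lower = p_hat - math.sqrt(math.log(1.0 / alpha) / (2.0 * n_total))
    if p_lower <= 0.5:
        return None  # cannot certify: top class not provably a majority
    return sigma * NormalDist().inv_cdf(p_lower)

# With 990/1000 votes the certificate succeeds; with 550/1000 the
# confidence bound dips below 1/2 and the smoothed model abstains.
r = certified_radius(990, 1000, sigma=0.5)
```

Note how the bound tightens as n_total grows: the √(log(1/α)/2n) slack shrinks, so p_lower approaches p_hat and the certified radius improves, which is the qualitative behavior the sample-complexity scaling laws quantify.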
Problem

Research questions and friction points this paper is trying to address.

Extending randomized smoothing to generative models with sequential outputs
Providing robustness certificates for vision-language models via oracle classification
Deriving sample-efficient scaling laws for certified radius and accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Connects generative outputs to oracle classification
Develops theory linking sample count to robustness
Derives improved scaling laws for certification