SGHA-Attack: Semantic-Guided Hierarchical Alignment for Transferable Targeted Attacks on Vision-Language Models

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor cross-model transferability of existing transfer-based targeted attacks on vision-language models, which often overfit to the embedding space of the surrogate model. To mitigate this limitation, the authors propose SGHA-Attack, a Semantic-Guided Hierarchical Alignment framework. The method samples a frozen text-to-image generative model to construct a multi-reference image pool, applies a Top-K semantic anchor weighting mechanism, and aligns intermediate visual and textual features across multiple network layers, enforcing cross-modal consistency at both global and spatial granularities. Extensive experiments demonstrate that the proposed approach significantly improves the targeted transfer success rate of adversarial examples against both open-source and commercial black-box vision-language models, while remaining robust under defenses such as preprocessing and purification.
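The Top-K semantic anchor weighting described above can be pictured as follows. This is a minimal NumPy sketch, not the paper's implementation: the function name, the cosine-similarity scoring, and the softmax temperature are all assumptions made for illustration.

```python
import numpy as np

def topk_anchor_mixture(ref_feats, target_feat, k=3, temperature=0.1):
    """Hypothetical sketch: pick the Top-K reference features most
    similar to the target feature and combine them into a weighted
    mixture anchor. ref_feats: (N, D), target_feat: (D,)."""
    # Cosine similarity between each reference and the target feature.
    ref_norm = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    tgt_norm = target_feat / np.linalg.norm(target_feat)
    sims = ref_norm @ tgt_norm
    # Indices of the K most semantically relevant references.
    topk = np.argsort(sims)[-k:]
    # Softmax weighting over the Top-K similarities (temperature is assumed).
    logits = sims[topk] / temperature
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # Weighted mixture anchor in feature space.
    mixture = (w[:, None] * ref_feats[topk]).sum(axis=0)
    return mixture, topk, w
```

The mixture anchor could then serve as the optimization target in place of a single reference embedding, which is the overfitting risk the summary points to.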

📝 Abstract
Large vision-language models (VLMs) are vulnerable to transfer-based adversarial perturbations, enabling attackers to optimize on surrogate models and manipulate black-box VLM outputs. Prior targeted transfer attacks often overfit to the surrogate-specific embedding space by relying on a single reference and emphasizing final-layer alignment, which underutilizes intermediate semantics and degrades transfer across heterogeneous VLMs. To address this, we propose SGHA-Attack, a Semantic-Guided Hierarchical Alignment framework that adopts multiple target references and enforces intermediate-layer consistency. Concretely, we generate a visually grounded reference pool by sampling a frozen text-to-image model conditioned on the target prompt, and then select the Top-K most semantically relevant anchors under the surrogate to form a weighted mixture for stable optimization guidance. Building on these anchors, SGHA-Attack injects target semantics throughout the feature hierarchy by aligning intermediate visual representations at both global and spatial granularities across multiple depths, and by synchronizing intermediate visual and textual features in a shared latent subspace to provide early cross-modal supervision before the final projection. Extensive experiments on open-source and commercial black-box VLMs show that SGHA-Attack achieves stronger targeted transferability than prior methods and remains robust under preprocessing and purification defenses.
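The hierarchical alignment at "both global and spatial granularities across multiple depths" can be sketched as a per-layer loss. This is an illustrative NumPy sketch under assumed shapes (each layer's feature map flattened to `(H*W, C)`); the function name, the equal global/spatial weighting, and the per-layer weights are assumptions, not the paper's actual objective.

```python
import numpy as np

def hierarchical_alignment_loss(adv_feats, anchor_feats, layer_weights=None):
    """Hypothetical sketch: sum, over layers, a global loss (cosine
    distance between mean-pooled features) and a spatial loss (mean
    cosine distance over spatial locations). Each feature map is
    assumed to be flattened to shape (H*W, C)."""
    if layer_weights is None:
        layer_weights = [1.0] * len(adv_feats)
    total = 0.0
    for lw, fa, fr in zip(layer_weights, adv_feats, anchor_feats):
        # Global granularity: cosine distance between pooled descriptors.
        ga, gr = fa.mean(axis=0), fr.mean(axis=0)
        global_loss = 1.0 - ga @ gr / (np.linalg.norm(ga) * np.linalg.norm(gr))
        # Spatial granularity: cosine distance at each location, averaged.
        na = fa / np.linalg.norm(fa, axis=1, keepdims=True)
        nr = fr / np.linalg.norm(fr, axis=1, keepdims=True)
        spatial_loss = (1.0 - (na * nr).sum(axis=1)).mean()
        total += lw * (global_loss + spatial_loss)
    return total
```

Minimizing such a loss over the adversarial perturbation would push the surrogate's intermediate representations toward the anchor's at every chosen depth, rather than only at the final projection.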
Problem

Research questions and friction points this paper is trying to address.

transferable targeted attacks
vision-language models
adversarial perturbations
intermediate semantics
black-box VLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic-Guided Alignment
Hierarchical Feature Alignment
Transferable Adversarial Attack
Vision-Language Models
Intermediate-layer Consistency