Concept-based Adversarial Attack: a Probabilistic Perspective

πŸ“… 2025-06-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Conventional single-image adversarial attacks suffer from semantic singularity and lack conceptual consistency. Method: We propose the first concept-level probabilistic generative framework for adversarial attacks, modeling the target class as an implicit conceptual distribution. By sampling diverse concept instances from this latent space and applying targeted adversarial perturbations, our method generates highly diverse adversarial examples while preserving semantic integrityβ€”e.g., identity or class recognizability. Crucially, it integrates probabilistic generative modeling with gradient-based optimization to enable controllable, concept-aware attacks. Contribution/Results: Extensive experiments demonstrate significant improvements across multiple models and datasets: average attack success rate increases by +12.3%, and sample diversity improves markedly (FID decreases by 37.6%), all while maintaining strong semantic consistency. This work establishes a novel paradigm for interpretable, robust, and semantically grounded adversarial attack and defense research.

Technology Category

Application Category

πŸ“ Abstract
We propose a concept-based adversarial attack framework that extends beyond single-image perturbations by adopting a probabilistic perspective. Rather than modifying a single image, our method operates on an entire concept -- represented by a probabilistic generative model or a set of images -- to generate diverse adversarial examples. Preserving the concept is essential, as it ensures that the resulting adversarial images remain identifiable as instances of the original underlying category or identity. By sampling from this concept-based adversarial distribution, we generate images that maintain the original concept but vary in pose, viewpoint, or background, thereby misleading the classifier. Mathematically, this framework remains consistent with traditional adversarial attacks in a principled manner. Our theoretical and empirical results demonstrate that concept-based adversarial attacks yield more diverse adversarial examples and effectively preserve the underlying concept, while achieving higher attack efficiency.
Problem

Research questions and friction points this paper is trying to address.

Extends adversarial attacks beyond single-image perturbations
Generates diverse adversarial examples preserving original concepts
Ensures adversarial images remain identifiable as original category
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept-based adversarial attack framework
Probabilistic generative model sampling
Diverse adversarial examples generation
πŸ”Ž Similar Papers
No similar papers found.