Improving Sustainability of Adversarial Examples in Class-Incremental Learning

📅 2025-11-12
🤖 AI Summary
In class-incremental learning (CIL), adversarial examples (AEs) rapidly degrade due to domain shift induced by dynamic model updates. To address this, we propose a Semantic-Sustainable Adversarial Example Generation framework. Our method tackles the problem by leveraging vision-language models to extract cross-task semantic representations, dynamically calibrating AE perturbation directions via real-time incremental model feedback, and enforcing semantic consistency through latent-space filtering and enhancement to stabilize semantic evolution. Integrating adversarial optimization, multimodal alignment, and CIL-aware modeling, our approach significantly improves cross-domain robustness of AEs under continual model adaptation. Evaluated on a challenging CIL setting where the number of classes increases ninefold, our method achieves an average 31.28% improvement in AE effectiveness over state-of-the-art baselines, demonstrating superior sustainability and generalizability.

📝 Abstract
Current adversarial examples (AEs) are typically designed for static models. However, with the wide adoption of Class-Incremental Learning (CIL), models are no longer static: they must be updated with new data whose distribution and labels differ from the old data. As a result, existing AEs often fail after CIL updates due to significant domain drift. In this paper, we propose SAE to enhance the sustainability of AEs against CIL. The core idea of SAE is to make AE semantics robust to domain drift by pulling them toward the target class while distinguishing them from all other classes. Achieving this is challenging, as relying solely on the initial CIL model to optimize AE semantics often leads to overfitting. To resolve this, we propose a Semantic Correction Module. This module encourages the AE semantics to generalize, building on a visual-language model capable of producing universal semantics; it also incorporates the CIL model to correct the optimization direction of the AE semantics, guiding them closer to the target class. To further reduce fluctuations in AE semantics, we propose a Filtering-and-Augmentation Module, which first identifies non-target examples that carry target-class semantics in the latent space and then augments them to foster more stable semantics. Comprehensive experiments demonstrate that SAE outperforms baselines by an average of 31.28% when the model is updated with a 9-fold increase in the number of classes.
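The abstract's core objective, making the AE's semantics more similar to the target class while distinguishing them from all other classes, can be sketched as a contrastive loss over latent embeddings. This is a minimal illustrative sketch, not the paper's exact loss: the function names (`cosine`, `semantic_loss`) and the pull/push formulation are assumptions for exposition.

```python
# Hedged sketch of the semantic objective from the abstract: pull the
# AE embedding toward the target-class prototype, push it away from
# every other class prototype. All names here are illustrative.
import math

def cosine(u, v):
    # Cosine similarity between two plain-Python vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def semantic_loss(ae_embedding, class_protos, target_idx):
    """Lower is better: high similarity to the target prototype,
    low similarity to every other class prototype."""
    pull = 1.0 - cosine(ae_embedding, class_protos[target_idx])
    push = sum(
        max(0.0, cosine(ae_embedding, p))
        for i, p in enumerate(class_protos) if i != target_idx
    ) / max(1, len(class_protos) - 1)
    return pull + push
```

In the paper's setting the prototypes would come from a vision-language model (so the semantics transfer across CIL tasks), and the loss would be minimized over the AE perturbation rather than evaluated on a fixed embedding.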
Problem

Research questions and friction points this paper is trying to address.

Enhancing adversarial example sustainability against class-incremental learning updates
Addressing domain drift-induced failure of adversarial examples in evolving models
Optimizing adversarial semantics to resist degradation during model updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhancing adversarial example robustness against domain drift
Using visual-language model for generalized semantic correction
Filtering and augmenting examples to stabilize semantics
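The filtering-and-augmentation idea in the last bullet can be sketched in two steps: select non-target examples whose latent code already resembles the target class, then augment them to densify that region. The threshold `tau`, the jitter scheme, and all function names below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a filtering-and-augmentation step: keep non-target
# examples that carry target-class semantics in the latent space, then
# augment them to stabilize those semantics. Illustrative only.
import math

TARGET = 0  # hypothetical target-class index

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def filter_and_augment(latents, labels, target_proto, tau=0.8, noise=0.05):
    """Step 1 (filter): non-target latents close to the target prototype.
    Step 2 (augment): jitter the selected latents to foster stability."""
    selected = [z for z, y in zip(latents, labels)
                if y != TARGET and cosine(z, target_proto) >= tau]
    # Deterministic scaling stands in for random augmentation here.
    augmented = [[c * (1.0 + noise) for c in z] for z in selected]
    return selected + augmented
```

A real pipeline would draw the augmentation noise stochastically and feed the augmented latents back into AE optimization; the sketch only shows the selection criterion.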
Taifeng Liu
School of Cyber Engineering, Xidian University, China
Xinjing Liu
School of Cyber Engineering, Xidian University, China
Liangqiu Dong
School of Cyber Engineering, Xidian University, China
Yang Liu
School of Cyber Engineering, Xidian University, China
Yilong Yang
Beihang University (Software Engineering, Artificial Intelligence)
Zhuo Ma
School of Cyber Engineering, Xidian University, China