ReplayCAD: Generative Diffusion Replay for Continual Anomaly Detection

📅 2025-05-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Continual Anomaly Detection (CAD) faces two key challenges: catastrophic forgetting and inaccurate segmentation of small anomalous regions. Existing replay methods struggle to simultaneously preserve pixel-level fidelity and retain historical knowledge. To address this, we propose a generative replay framework grounded in pre-trained diffusion models. Our approach introduces the first semantic-embedding–spatial-feature co-driven mechanism for compressing historical data within the conditional diffusion space, enabling high-fidelity, high-diversity pixel-level replay. Specifically, it integrates class-semantic embedding retrieval, spatial-feature-guided sampling, and generative data reconstruction. Evaluated on VisA and MVTec, our method improves segmentation performance by 11.5% and 8.1%, respectively, achieving state-of-the-art results on both classification and segmentation metrics. This work establishes the first diffusion-based replay paradigm for CAD that jointly leverages semantic and spatial priors.

📝 Abstract
Continual Anomaly Detection (CAD) enables anomaly detection models to learn new classes while preserving knowledge of historical classes. CAD faces two key challenges: catastrophic forgetting and segmentation of small anomalous regions. Existing CAD methods store image distributions or patch features to mitigate catastrophic forgetting, but they fail to preserve pixel-level detailed features for accurate segmentation. To overcome this limitation, we propose ReplayCAD, a novel diffusion-driven generative replay framework that replays high-quality historical data, thus effectively preserving pixel-level detailed features. Specifically, we compress historical data by searching for a class semantic embedding in the conditional space of the pre-trained diffusion model, which can guide the model to replay data with fine-grained pixel details, thus improving the segmentation performance. However, relying solely on semantic features results in limited spatial diversity. Hence, we further use spatial features to guide data compression, achieving precise control of the sample space and thereby generating more diverse data. Our method achieves state-of-the-art performance in both classification and segmentation, with notable improvements in segmentation: 11.5% on VisA and 8.1% on MVTec. Our source code is available at https://github.com/HULEI7/ReplayCAD.
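The core compression step described in the abstract, searching for a class embedding in a frozen generator's conditioning space, can be illustrated with a deliberately tiny sketch. This is not the authors' implementation: the linear generator, the dimensions, and the squared-error loss below are all illustrative stand-ins for the pre-trained conditional diffusion model.

```python
import numpy as np

# Toy sketch of the class-embedding search (NOT the authors' code): a frozen
# generator G maps a class embedding e to an "image"; a class is compressed
# by optimizing e so that G(e) reconstructs its samples.
rng = np.random.default_rng(0)
D_EMB, D_IMG = 8, 32

W = rng.normal(size=(D_IMG, D_EMB))        # frozen "generator" weights
images = rng.normal(size=(16, D_IMG))      # historical samples of one class

def generate(e):
    return W @ e                           # stand-in for the diffusion model

e = np.zeros(D_EMB)                        # learnable class semantic embedding
lr = 0.01
for _ in range(1000):
    residual = generate(e)[None, :] - images          # (16, D_IMG)
    grad = 2.0 * (residual @ W).mean(axis=0)          # d/de mean ||G(e) - x||^2
    e -= lr * grad

# e now encodes the class compactly: generate(e) is the best replayable
# reconstruction (for this linear toy, the least-squares fit to the class mean).
```

In ReplayCAD the generator is a pre-trained conditional diffusion model and the embedding lives in its conditioning space, so the replayed samples retain pixel-level detail rather than collapsing to a linear average as in this toy.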
Problem

Research questions and friction points this paper is trying to address.

Addresses catastrophic forgetting in continual anomaly detection
Improves segmentation of small anomalous regions
Enhances pixel-level feature preservation for accurate detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion-driven generative replay framework
Class semantic embedding for data compression
Spatial features enhance sample diversity
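To make the replay idea behind these contributions concrete, here is a minimal, self-contained sketch of generative replay in continual learning. It is a hypothetical stand-in for ReplayCAD's pipeline: the Gaussian "generator", the per-class mean prototypes, and the two toy tasks are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of generative replay (hypothetical stand-in for ReplayCAD):
# after task 1, only a compact generator of task-1 data is kept; during
# task 2 the model trains on replayed plus new samples, so it retains
# both classes instead of forgetting the historical one.
rng = np.random.default_rng(1)
task1 = rng.normal(loc=0.0, size=(64, 4))       # "historical" class
task2 = rng.normal(loc=5.0, size=(64, 4))       # new class

# "Compress" task 1 into a generator (here just its mean/std; the paper
# instead learns an embedding in a pre-trained diffusion model).
mu1, sd1 = task1.mean(axis=0), task1.std(axis=0)
replay = mu1 + sd1 * rng.normal(size=(64, 4))   # replayed pseudo-samples

prototypes = {
    "historical": replay.mean(axis=0),          # learned from replay only
    "new": task2.mean(axis=0),
}

def classify(x):
    return min(prototypes, key=lambda k: np.linalg.norm(x - prototypes[k]))

print(classify(task1[0]), classify(task2[0]))   # both classes still recognized
```

The design point this illustrates: the historical class survives even though no original task-1 sample is stored, only a compact generative description of it.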
Authors

Lei Hu, South China University of Technology
Zhiyong Gan, China United Network Communications Corporation Limited Guangdong Branch
Ling Deng, China United Network Communications Corporation Limited Guangdong Branch
Jinglin Liang, South China University of Technology
Lingyu Liang, South China University of Technology (Computer Vision, Machine Learning)
Shuangping Huang, Professor, Electronic and Information Engineering, South China University of Technology (Computer Vision, AIGC, LLM, Embodied AI)
Tianshui Chen, Guangdong University of Technology