Continual Visual Anomaly Detection on the Edge: Benchmark and Efficient Solutions

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of visual anomaly detection under the dual constraints of limited resources on edge devices and the need for continual learning. To this end, the authors establish the first comprehensive benchmark specifically designed for continual visual anomaly detection in edge scenarios and propose a lightweight, efficient approach named Tiny-Dinomaly, along with enhanced variants of PatchCore and PaDiM. By integrating lightweight backbone architectures, the DINO foundation model, continual learning mechanisms, and model compression techniques, the proposed methods substantially reduce resource consumption while improving detection performance—achieving a 13× reduction in memory usage, a 20× decrease in computational cost, and a 5-percentage-point gain in Pixel F1 score. This work pioneers the joint consideration of edge deployment and continual learning requirements, offering an effective paradigm for anomaly detection in resource-constrained environments.
📝 Abstract
Visual Anomaly Detection (VAD) is a critical task for many applications including industrial inspection and healthcare. While VAD has been extensively studied, two key challenges remain largely unaddressed in conjunction: edge deployment, where computational resources are severely constrained, and continual learning, where models must adapt to evolving data distributions without forgetting previously acquired knowledge. Studying these challenges in isolation is insufficient, as methods designed for one setting make assumptions that break down when the other constraint is simultaneously imposed. In this work, we propose the first comprehensive benchmark for VAD on the edge in the continual learning scenario, evaluating seven VAD models across three lightweight backbone architectures. Our benchmark provides guidance for the selection of the optimal backbone and VAD method under joint efficiency and adaptability constraints, characterizing the trade-offs between memory footprint, inference cost, and detection performance. Furthermore, we propose Tiny-Dinomaly, a lightweight adaptation of the Dinomaly model built on the DINO foundation model that achieves a 13x smaller memory footprint and 20x lower computational cost while improving Pixel F1 by 5 percentage points. Finally, we introduce targeted modifications to PatchCore and PaDiM to improve their efficiency in the continual learning setting.
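The Pixel F1 gain quoted above refers to pixel-level F1: the predicted anomaly score map is binarized at some threshold and compared pixel-by-pixel against the ground-truth anomaly mask. A minimal sketch of how this metric is typically computed (the `pixel_f1` helper, the toy score map, and the threshold value are illustrative assumptions, not from the paper):

```python
import numpy as np

def pixel_f1(scores: np.ndarray, mask: np.ndarray, threshold: float) -> float:
    """Pixel-level F1: binarize the anomaly score map at `threshold`
    and compare against the boolean ground-truth anomaly mask."""
    pred = scores >= threshold
    tp = np.logical_and(pred, mask).sum()   # correctly flagged anomalous pixels
    fp = np.logical_and(pred, ~mask).sum()  # normal pixels flagged as anomalous
    fn = np.logical_and(~pred, mask).sum()  # anomalous pixels missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy 4x4 example: high scores in the top-left 2x2 block;
# ground truth marks that block plus one extra pixel the model misses.
scores = np.zeros((4, 4))
scores[:2, :2] = 0.9
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
mask[3, 3] = True
print(round(pixel_f1(scores, mask, threshold=0.5), 3))  # → 0.889
```

In practice the threshold is often chosen per dataset (e.g. the value maximizing F1 on a validation split), so a "5-percentage-point gain" reflects the best achievable operating point rather than a single fixed threshold.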
Problem

Research questions and friction points this paper is trying to address.

Continual Learning
Edge Deployment
Visual Anomaly Detection
Resource Constraints
Catastrophic Forgetting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continual Learning
Edge Computing
Visual Anomaly Detection
Model Efficiency
Tiny-Dinomaly