Reasoning-Driven Anomaly Detection and Localization with Image-Level Supervision

📅 2026-03-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of precise pixel-level anomaly localization using only image-level labels, a setting where existing methods often struggle without external modules or dense annotations. We propose the first approach that harnesses the intrinsic reasoning capabilities of multimodal large language models (MLLMs) to jointly perform anomaly detection, localization, and interpretable reasoning—without requiring additional components or pixel-level supervision. Our method extracts anomaly-relevant tokens through autoregressive reasoning and aggregates their visual attention maps to generate anomaly heatmaps. Furthermore, we introduce a Consistency-Guided Reasoning Optimization (CGRO) mechanism that aligns linguistic reasoning with visual attention via reinforcement learning. Evaluated on four public benchmarks, our approach significantly advances performance in detection, localization, and interpretability, achieving results comparable to pixel-supervised methods using only image-level labels.
📝 Abstract
Multimodal large language models (MLLMs) have recently demonstrated remarkable reasoning and perceptual abilities for anomaly detection. However, most approaches remain confined to image-level anomaly detection and textual reasoning, while pixel-level localization still relies on external vision modules and dense annotations. In this work, we activate the intrinsic reasoning potential of MLLMs to perform anomaly detection, pixel-level localization, and interpretable reasoning solely from image-level supervision, without any auxiliary components or pixel-wise labels. Specifically, we propose Reasoning-Driven Anomaly Localization (ReAL), which extracts anomaly-related tokens from the autoregressive reasoning process and aggregates their attention responses to produce pixel-level anomaly maps. We further introduce a Consistency-Guided Reasoning Optimization (CGRO) module that leverages reinforcement learning to align reasoning tokens with visual attentions, resulting in more coherent reasoning and accurate anomaly localization. Extensive experiments on four public benchmarks demonstrate that our method significantly improves anomaly detection, localization, and interpretability. Remarkably, despite relying solely on image-level supervision, our approach achieves performance competitive with MLLM-based methods trained under dense pixel-level supervision. Code is available at https://github.com/YizhouJin313/ReADL.
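The abstract's core mechanism — selecting anomaly-related tokens from the autoregressive reasoning trace and aggregating their visual attention responses into a pixel-level anomaly map — can be sketched as follows. This is a hypothetical illustration, not the authors' released code: the function name, the `(num_tokens, grid_h, grid_w)` attention layout, and the mean-then-normalize aggregation are assumptions about how such aggregation might look.

```python
import numpy as np

def aggregate_anomaly_heatmap(attn_maps, anomaly_token_ids, image_hw=(224, 224)):
    """Hypothetical sketch: fuse per-token visual attention into one heatmap.

    attn_maps: array of shape (num_tokens, grid_h, grid_w), the attention of
               each generated text token over the visual patch grid.
    anomaly_token_ids: indices of tokens judged anomaly-related.
    Returns a heatmap in [0, 1] upsampled to image_hw.
    """
    selected = attn_maps[anomaly_token_ids]                    # (k, gh, gw)
    heatmap = selected.mean(axis=0)                            # aggregate across tokens
    heatmap = (heatmap - heatmap.min()) / (np.ptp(heatmap) + 1e-8)  # min-max normalize

    # Nearest-neighbour upsample from the patch grid to image resolution
    gh, gw = heatmap.shape
    rows = np.repeat(np.arange(gh), image_hw[0] // gh)
    cols = np.repeat(np.arange(gw), image_hw[1] // gw)
    return heatmap[np.ix_(rows, cols)]

# Toy usage: 5 generated tokens, a 14x14 ViT patch grid, tokens 1 and 3 anomaly-related
attn = np.random.rand(5, 14, 14)
hm = aggregate_anomaly_heatmap(attn, [1, 3], image_hw=(224, 224))
```

In the paper, the token selection itself comes from the reasoning process and CGRO further aligns those tokens with the visual attention via reinforcement learning; the sketch above only covers the final aggregation step.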
Problem

Research questions and friction points this paper is trying to address.

anomaly detection
pixel-level localization
image-level supervision
multimodal large language models
interpretable reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning-driven
anomaly localization
image-level supervision
multimodal large language models
attention aggregation
Yizhou Jin
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China
Yuezhu Feng
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
Jinjin Zhang
Beihang University
Peng Wang
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
Qingjie Liu
Professor, School of Computer Science and Engineering, Beihang University
Computer Vision and Pattern Recognition
Yunhong Wang
Professor, School of Computer Science and Engineering, Beihang University
Biometrics, Pattern Recognition, Image Processing, Computer Vision