🤖 AI Summary
To address the limited global context modeling and high false-negative rates in high-precision 3D anomaly detection, which stem from the inherently local receptive fields of point-based models, this paper proposes a lossless multi-view reconstruction framework. It first applies a reversible geometric projection to convert high-resolution input point clouds into multiple calibrated 2D views, then builds an image-level reconstruction pipeline on top of them. The projection, encoder, and decoder networks are jointly optimized to strengthen global representation learning. The key innovation is a geometry-preserving point-to-image conversion combined with multi-view consistency constraints that sharpen fine-grained anomaly localization. Evaluated on the Real3D-AD benchmark, the method achieves 89.6% instance-level and 95.7% point-level AU-ROC, substantially outperforming existing state-of-the-art approaches.
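As a rough intuition for the point-to-image conversion described above, the sketch below projects a point cloud onto three axis-aligned orthographic depth views. This is an illustrative assumption, not the paper's actual projection: MVR's conversion is lossless and jointly learned, whereas this simple rasterization keeps only the nearest point per pixel.

```python
import numpy as np

def project_to_depth_views(points, resolution=64):
    """Project a point cloud onto axis-aligned orthographic depth maps.

    Simplified stand-in for a reversible geometric projection: each of the
    three canonical views stores, per pixel, the depth of the nearest point.
    (Unlike the lossless conversion in MVR, points sharing a pixel collide.)
    """
    # Normalize coordinates into the unit cube [0, 1].
    mins, maxs = points.min(axis=0), points.max(axis=0)
    normed = (points - mins) / (maxs - mins + 1e-9)

    views = []
    for axis in range(3):  # one orthographic view per coordinate axis
        u, v = [a for a in range(3) if a != axis]
        depth = np.full((resolution, resolution), np.inf)
        iu = np.clip((normed[:, u] * resolution).astype(int), 0, resolution - 1)
        iv = np.clip((normed[:, v] * resolution).astype(int), 0, resolution - 1)
        # Keep the smallest depth (nearest point) landing in each pixel.
        np.minimum.at(depth, (iu, iv), normed[:, axis])
        views.append(depth)
    return views

pts = np.random.rand(10000, 3).astype(np.float32)
views = project_to_depth_views(pts)
print(len(views), views[0].shape)  # 3 depth images, one per view
```

The resulting 2D views could then feed any image-level reconstruction network; pixels left at `inf` mark empty background.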
📝 Abstract
3D anomaly detection is critical to industrial quality inspection. Although existing methods have achieved notable progress, their performance degrades in high-precision settings because they capture insufficient global information. To address this, we propose Multi-View Reconstruction (MVR), a method that losslessly converts high-resolution point clouds into multi-view images and employs a reconstruction-based anomaly detection framework to strengthen global information learning. Extensive experiments demonstrate the effectiveness of MVR: it achieves 89.6% object-wise AU-ROC and 95.7% point-wise AU-ROC on the Real3D-AD benchmark.