BridgeNet: A Unified Multimodal Framework for Bridging 2D and 3D Industrial Anomaly Detection

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In industrial anomaly detection, 2D image-based methods struggle to identify 3D depth anomalies, and multi-modal anomalous samples are severely scarce. To address these challenges, this paper proposes a decoupled multi-modal anomaly detection framework. Methodologically, it explicitly disentangles RGB appearance and visible-depth information—extracted from point clouds—for the first time; introduces multi-scale Gaussian modeling and a unified texture anomaly generator to jointly synthesize rich, semantically consistent anomalies across both modalities; and employs a parameter-sharing mechanism to bridge 2D and 3D representations, circumventing complex cross-modal fusion. Evaluated on MVTec-3D AD and Eyecandies, our approach significantly outperforms state-of-the-art methods, demonstrating the effectiveness of cross-modal disentanglement and generative augmentation. The source code is publicly available.
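The parameter-sharing mechanism described above can be illustrated with a toy sketch: the same feature extractor is applied to RGB and to a depth map tiled to three channels, so both modalities pass through identical weights. All names, shapes, and the per-pixel linear projection below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8)).astype(np.float32)  # one shared weight matrix

def extract(x: np.ndarray) -> np.ndarray:
    """Apply the shared per-pixel projection (with ReLU) to an (H, W, 3) input."""
    return np.maximum(x @ W, 0.0)

rgb = rng.random((4, 4, 3)).astype(np.float32)
depth = rng.random((4, 4, 1)).astype(np.float32)
depth3 = np.repeat(depth, 3, axis=-1)  # tile 1-channel depth to 3 channels

# The same parameters W process both modalities; no cross-modal fusion needed.
f_rgb, f_depth = extract(rgb), extract(depth3)
```

Because both modalities share one set of weights, downstream modules can consume `f_rgb` and `f_depth` interchangeably, which is the "bridging" the summary refers to.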

📝 Abstract
Industrial anomaly detection (AD) for 2D images has attracted significant attention, and 2D AD methods have made substantial progress. However, 2D information alone is insufficient for identifying 3D depth anomalies. Whether depth information is explicitly fused into RGB images or depth features are extracted with point cloud backbone networks, both approaches struggle to adequately represent 3D information in multimodal scenarios because of the disparities among modalities. In addition, abnormal samples are scarce in industrial data, especially in multimodal scenarios, so anomaly generation is necessary to simulate real-world abnormal samples. We therefore propose a novel unified multimodal anomaly detection framework to address these issues. Our contributions consist of three key aspects. (1) We extract visible depth information from 3D point cloud data in a simple way and use 2D RGB images to represent appearance, which disentangles depth and appearance to support unified anomaly generation. (2) Benefiting from this flexible input representation, the proposed Multi-Scale Gaussian Anomaly Generator and Unified Texture Anomaly Generator can generate richer anomalies in both RGB and depth. (3) All modules share parameters for RGB and depth data, effectively bridging 2D and 3D anomaly detection, so subsequent modules can directly leverage features from both modalities without complex fusion. Experiments show our method outperforms the state of the art (SOTA) on the MVTec-3D AD and Eyecandies datasets. Code available at: https://github.com/Xantastic/BridgeNet
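As a rough illustration of the "visible depth" extraction mentioned in contribution (1): for an organized point cloud (per-pixel XYZ coordinates, as provided by MVTec-3D AD), the visible depth map is essentially the normalized z-channel. The function below is a hypothetical sketch under that assumption, not the authors' code.

```python
import numpy as np

def visible_depth_map(organized_pc: np.ndarray) -> np.ndarray:
    """Extract a normalized 2D depth map from an organized point cloud.

    `organized_pc` is an (H, W, 3) array of per-pixel XYZ coordinates;
    invalid pixels are assumed to be all-zero (MVTec-3D AD convention).
    """
    z = organized_pc[..., 2].astype(np.float32)
    valid = z != 0  # zero depth marks missing points
    if valid.any():
        z_min, z_max = z[valid].min(), z[valid].max()
        z = np.where(valid, (z - z_min) / (z_max - z_min + 1e-8), 0.0)
    return z
```

The resulting single-channel map has the same spatial layout as the RGB image, which is what lets a single 2D pipeline handle both modalities.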
Problem

Research questions and friction points this paper is trying to address.

Bridging 2D and 3D industrial anomaly detection gaps
Overcoming insufficient 3D depth anomaly identification
Addressing scarcity of abnormal industrial samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts depth from 3D point clouds
Generates anomalies using multi-scale Gaussian
Shares parameters for RGB and depth
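A minimal sketch of what multi-scale Gaussian anomaly generation might look like, assuming pseudo-anomalies are synthesized as random Gaussian bumps at several spatial scales on a normalized single-channel image (depth or one RGB channel). The function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def multi_scale_gaussian_anomaly(img, scales=(8, 16, 32), amp=0.3, seed=None):
    """Add synthetic anomalies as Gaussian bumps at several spatial scales.

    Hypothetical sketch: one random bump per scale is added to (or subtracted
    from) a normalized (H, W) image; returns the perturbed image and a binary
    anomaly mask marking where each bump is strong.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    out = img.astype(np.float32).copy()
    mask = np.zeros((h, w), dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    for sigma in scales:
        cy, cx = rng.integers(0, h), rng.integers(0, w)  # random bump center
        bump = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        out += rng.choice([-amp, amp]) * bump  # raise or sink the surface
        mask = np.maximum(mask, (bump > 0.5).astype(np.float32))
    return np.clip(out, 0.0, 1.0), mask
```

Because the input representation treats depth as a 2D image, the same generator can perturb either modality, which matches the "unified anomaly generation" idea.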
Authors

An Xiang
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences

Zixuan Huang
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences

Xitong Gao
Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (Efficient Training and Inference; AI Security and Privacy)

Kejiang Ye
Professor, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (Cloud Computing; AI Systems; Industrial Internet)

Cheng-zhong Xu
State Key Lab of IOTSC, Department of CIS, University of Macau