🤖 AI Summary
Zero-shot 3D anomaly detection aims to identify defects in 3D objects without labeled anomalous samples, yet existing approaches are largely confined to unimodal point clouds and suffer from limited semantic representation capability. This paper proposes a multimodal collaborative framework for zero-shot 3D anomaly detection, jointly leveraging point clouds, RGB images, and textual priors. We introduce multimodal prompt learning and a collaborative modulation mechanism to achieve cross-modal semantic disentanglement and complementary feature fusion. Key innovations include: (i) object-agnostic disentangled text prompts, (ii) an RGB-point cloud dual-guided modulation network, and (iii) a multimodal contrastive loss. Our method achieves significant performance gains over state-of-the-art unimodal and multimodal baselines across multiple benchmarks, demonstrating that multimodal semantic collaboration critically enhances zero-shot generalization capability.
📝 Abstract
Zero-shot 3D (ZS-3D) anomaly detection aims to identify defects in 3D objects without relying on labeled training data, making it especially valuable in scenarios constrained by data scarcity, privacy, or high annotation cost. However, most existing methods focus exclusively on point clouds, neglecting the rich semantic cues available from complementary modalities such as RGB images and textual priors. This paper introduces MCL-AD, a novel framework that leverages multimodal collaborative learning across point clouds, RGB images, and textual semantics to achieve superior zero-shot 3D anomaly detection. Specifically, we propose a Multimodal Prompt Learning Mechanism (MPLM) that enhances intra-modal representation capability and inter-modal collaborative learning by introducing an object-agnostic decoupled text prompt and a multimodal contrastive loss. In addition, a Collaborative Modulation Mechanism (CMM) is proposed to fully leverage the complementary representations of point clouds and RGB images by jointly modulating the RGB image-guided and point cloud-guided branches. Extensive experiments demonstrate that the proposed MCL-AD framework achieves state-of-the-art performance in ZS-3D anomaly detection.
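The abstract names a multimodal contrastive loss as one component of MPLM but does not give its form. A common way to realize such a loss is a symmetric InfoNCE term summed over the three modality pairs (point cloud, RGB, text); the sketch below assumes that formulation with NumPy, and all function names, the temperature value, and the pairwise-sum design are illustrative assumptions, not the paper's actual definition.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of embeddings.

    Row i of `positives` is the positive for row i of `anchors`;
    all other rows in the batch act as negatives.
    (Assumed formulation -- not taken from the paper.)
    """
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(a))

    def xent(l):
        # numerically stable cross-entropy against the diagonal targets
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average of anchor->positive and positive->anchor directions
    return 0.5 * (xent(logits) + xent(logits.T))

def multimodal_contrastive_loss(pc_emb, rgb_emb, txt_emb, temperature=0.07):
    """Sum pairwise contrastive terms over the three modality pairs."""
    return (info_nce(pc_emb, rgb_emb, temperature)
            + info_nce(pc_emb, txt_emb, temperature)
            + info_nce(rgb_emb, txt_emb, temperature))
```

Under this formulation, well-aligned cross-modal embeddings of the same object drive the loss toward zero, while misaligned batches are penalized, which is the behavior the abstract attributes to inter-modal collaborative learning.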