AI Summary
Existing monocular 3D object detection methods operate under closed-set assumptions and generalize poorly to unseen object categories and novel camera configurations. To address this limitation, we propose DetAny3D, the first framework enabling open-world zero-shot monocular 3D detection. Our approach transfers knowledge from 2D foundation models (SAM and CLIP) to 3D detection via a novel 2D Aggregator and a 3D Interpreter with Zero-Embedding Mapping, effectively mitigating catastrophic forgetting during cross-dimensional adaptation. We further integrate feature alignment, disentangled 3D geometric modeling, and monocular depth priors for robust scene understanding. Evaluated on both unseen categories and new camera setups, DetAny3D achieves state-of-the-art performance while also surpassing most prior methods on standard benchmarks, significantly enhancing generalization to rare or novel objects in real-world applications such as autonomous driving.
Abstract
Despite the success of deep learning in closed-set 3D object detection, existing approaches struggle with zero-shot generalization to novel objects and camera configurations. We introduce DetAny3D, a promptable 3D detection foundation model capable of detecting any novel object under arbitrary camera configurations using only monocular inputs. Training a foundation model for 3D detection is fundamentally constrained by the limited availability of annotated 3D data, which motivates DetAny3D to leverage the rich prior knowledge embedded in extensively pre-trained 2D foundation models to compensate for this scarcity. To effectively transfer 2D knowledge to 3D, DetAny3D incorporates two core modules: the 2D Aggregator, which aligns features from different 2D foundation models, and the 3D Interpreter with Zero-Embedding Mapping, which mitigates catastrophic forgetting in 2D-to-3D knowledge transfer. Experimental results validate the strong generalization of DetAny3D, which not only achieves state-of-the-art performance on unseen categories and novel camera configurations, but also surpasses most competitors on in-domain data. DetAny3D sheds light on the potential of 3D foundation models for diverse applications in real-world scenarios, e.g., rare object detection in autonomous driving, and demonstrates promise for further exploration of 3D-centric tasks in open-world settings. More visualization results can be found at the DetAny3D project page.
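To make the two-module design concrete, below is a minimal NumPy sketch of how a 2D Aggregator and a Zero-Embedding Mapping could interact. All names, dimensions, and the fusion scheme here are illustrative assumptions, not the paper's actual implementation: we assume the aggregator fuses SAM and CLIP features by projecting both into a shared space, and that "zero-embedding mapping" behaves like a zero-initialized projection (in the spirit of zero-initialized adapter layers), so the new 3D branch contributes nothing at initialization and the pretrained 2D representations are left intact, which is one way to mitigate catastrophic forgetting.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_2d(sam_feat, clip_feat, w_sam, w_clip):
    # Hypothetical 2D Aggregator: project each foundation model's
    # features into a shared dimension and sum them.
    # (The paper's exact fusion mechanism is not specified here.)
    return sam_feat @ w_sam + clip_feat @ w_clip

# Illustrative dimensions (assumptions, not from the paper).
d_sam, d_clip, d_shared = 256, 512, 128
w_sam = rng.normal(size=(d_sam, d_shared)) * 0.02
w_clip = rng.normal(size=(d_clip, d_shared)) * 0.02

# Zero-Embedding Mapping (assumption): the 3D Interpreter's new
# projection starts at exactly zero, so at step 0 it adds nothing
# to the fused 2D features -- the pretrained 2D knowledge is not
# perturbed until the 3D branch is gradually learned.
w_3d_zero = np.zeros((d_shared, d_shared))

# Fake per-token features standing in for SAM / CLIP outputs.
sam_feat = rng.normal(size=(10, d_sam))
clip_feat = rng.normal(size=(10, d_clip))

fused = aggregate_2d(sam_feat, clip_feat, w_sam, w_clip)
delta_3d = fused @ w_3d_zero  # zero contribution at initialization
out = fused + delta_3d

# At initialization the output equals the fused 2D features exactly.
assert np.allclose(out, fused)
```

The design choice this sketch highlights is that gating new 3D capacity behind a zero-initialized mapping lets training start from the 2D foundation models' behavior and depart from it only as the 3D supervision demands.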