🤖 AI Summary
This work addresses the limitation of handcrafted textual prompts in weakly supervised monocular 3D object detection, which fail to capture the visual diversity of individual instances within a scene and thereby hinder the learning of effective scene-aware representations. To overcome this, the authors propose a vision-guided probabilistic prompt learning paradigm that flexibly integrates with various weakly supervised frameworks. The approach pioneers the incorporation of visual uncertainty modeling into prompt learning by dynamically generating language prompts that reflect visual uncertainty through an Adaptive Prompt Bank (APB) and Multi-Gaussian Prompt Modeling (MGPM). Furthermore, it enhances cross-modal semantic consistency via vision-language embedding fusion and region-of-interest (RoI)-level contrastive matching. Evaluated on the KITTI benchmark, the method achieves up to a 4.8% improvement in average precision, significantly outperforming existing baselines.
📝 Abstract
Monocular 3D object detection typically relies on pseudo-labeling techniques to reduce dependency on real-world annotations. Recent advances demonstrate that deterministic linguistic cues can serve as effective auxiliary weak supervision signals, providing complementary semantic context. However, hand-crafted textual descriptions struggle to capture the inherent visual diversity of individual instances across scenes, limiting the model's ability to learn scene-aware representations. To address this challenge, we propose Visual-referred Probabilistic Prompt Learning (VirPro), an adaptive multi-modal pretraining paradigm that can be seamlessly integrated into diverse weakly supervised monocular 3D detection frameworks. Specifically, we generate a diverse set of learnable, instance-conditioned prompts across scenes and store them in an Adaptive Prompt Bank (APB). We then introduce Multi-Gaussian Prompt Modeling (MGPM), which incorporates scene-based visual features into the corresponding textual embeddings, allowing the text prompts to express visual uncertainty. From the fused vision-language embeddings, we decode a prompt-targeted Gaussian, from which we derive a unified object-level prompt embedding for each instance. RoI-level contrastive matching is employed to enforce modality alignment, bringing embeddings of co-occurring objects within the same scene closer in the latent space, thus enhancing semantic coherence. Extensive experiments on the KITTI benchmark demonstrate that integrating our pretraining paradigm consistently yields substantial performance gains, achieving up to a 4.8% average precision improvement over the baseline.
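The two core ideas in the abstract — text prompts that carry visual uncertainty via a Gaussian, and RoI-level contrastive matching between region and prompt embeddings — can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`fuse_gaussian_prompt`, `roi_contrastive_loss`), the projection matrices `W_mu`/`W_sigma`, and the InfoNCE-style loss are all assumptions standing in for the paper's actual MGPM and matching modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_gaussian_prompt(text_emb, vis_feat, W_mu, W_sigma):
    """Hypothetical MGPM-style fusion: the text embedding supplies the
    Gaussian mean, while the visual feature shifts the mean and sets a
    per-dimension log-variance that encodes visual uncertainty."""
    mu = text_emb + vis_feat @ W_mu           # visually conditioned mean
    log_var = vis_feat @ W_sigma              # visual uncertainty
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps   # reparameterized sample

def roi_contrastive_loss(roi_embs, prompt_embs, tau=0.07):
    """InfoNCE-style RoI-level matching: each RoI embedding should be
    closest to its own prompt embedding among all prompts in the scene."""
    a = roi_embs / np.linalg.norm(roi_embs, axis=1, keepdims=True)
    b = prompt_embs / np.linalg.norm(prompt_embs, axis=1, keepdims=True)
    logits = (a @ b.T) / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_probs)))   # -log p(matched prompt)

# Toy scene: 4 instances, 8-dim embeddings (shapes are illustrative).
d, n = 8, 4
text = rng.standard_normal((n, d))
vis = rng.standard_normal((n, d))
W_mu = rng.standard_normal((d, d)) * 0.1
W_sigma = rng.standard_normal((d, d)) * 0.1

prompts = fuse_gaussian_prompt(text, vis, W_mu, W_sigma)
loss = roi_contrastive_loss(vis, prompts)
print(prompts.shape, round(loss, 4))
```

In this reading, the contrastive term pulls each object's RoI feature toward the prompt embedding decoded from its own Gaussian, while the sampled variance lets visually ambiguous instances occupy a wider region of the shared latent space.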