🤖 AI Summary
Retrieving 3D objects from complex indoor scenes using only 2D mask images and natural language descriptions remains challenging due to missing 3D context, viewpoint distortion, mask noise, texture scarcity, and linguistic ambiguity.
Method: We propose a language–shape co-driven cross-modal retrieval framework. First, CLIP enables fine-grained image–text semantic alignment. Second, masks undergo preprocessing—extracting the largest connected component and denoising—followed by binary contour extraction to encode explicit shape priors. Third, a shape-guided re-ranking mechanism and robust majority-voting strategy jointly integrate linguistic understanding with geometric constraints.
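The mask-preprocessing step described above can be sketched as follows. This is a minimal illustration using SciPy morphology, not the paper's exact implementation: the largest connected component is kept to suppress noise specks, and the binary contour (mask minus its erosion) serves as the explicit shape prior.

```python
import numpy as np
from scipy import ndimage

def preprocess_mask(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest connected component of a binary mask."""
    labeled, num = ndimage.label(mask > 0)
    if num == 0:
        return np.zeros_like(mask, dtype=bool)
    # component sizes for labels 1..num; keep the biggest one
    sizes = ndimage.sum(mask > 0, labeled, range(1, num + 1))
    return labeled == (int(np.argmax(sizes)) + 1)

def binary_contour(mask: np.ndarray) -> np.ndarray:
    """Contour = mask minus its morphological erosion."""
    eroded = ndimage.binary_erosion(mask)
    return mask & ~eroded

# Toy mask: a 5x5 blob plus a one-pixel noise speck.
mask = np.zeros((10, 10), dtype=bool)
mask[2:7, 2:7] = True   # main component (25 px)
mask[8, 8] = True       # noise speck (1 px)

clean = preprocess_mask(mask)
contour = binary_contour(clean)
print(clean.sum())    # 25 -- speck removed
print(contour.sum())  # 16 -- boundary ring of the 5x5 square
```

Any contour-extraction routine (e.g., OpenCV's `findContours`) could stand in for the erosion-based version shown here; the key point is that the silhouette, not the texture, carries the shape signal.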
Results: Evaluated on the private ROOMELSA benchmark, our method significantly improves open-world 3D object retrieval accuracy. Ablation studies confirm that explicitly incorporating shape priors effectively mitigates the limitations of weak language-only supervision, establishing shape awareness as a critical factor for robust cross-modal 3D retrieval.
📝 Abstract
Retrieving 3D objects in complex indoor environments using only a masked 2D image and a natural language description presents significant challenges. The ROOMELSA challenge limits access to full 3D scene context, complicating reasoning about object appearance, geometry, and semantics. The difficulty is compounded by distorted viewpoints, textureless masked regions, ambiguous language prompts, and noisy segmentation masks. To address these challenges, we propose SAMURAI: Shape-Aware Multimodal Retrieval for 3D Object Identification. SAMURAI integrates CLIP-based semantic matching with shape-guided re-ranking derived from binary silhouettes of masked regions, alongside a robust majority voting strategy. A dedicated preprocessing pipeline enhances mask quality by extracting the largest connected component and removing background noise. Our hybrid retrieval framework leverages both language and shape cues, achieving competitive performance on the ROOMELSA private test set. These results highlight the importance of combining shape priors with language understanding for robust open-world 3D object retrieval.
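The hybrid retrieval described in the abstract can be sketched as a score fusion followed by voting. The linear weighting `alpha` and the top-1 voting rule below are illustrative assumptions, not the paper's exact formulation: CLIP text-image similarities give an initial ranking, a per-candidate shape score (e.g., silhouette IoU) re-ranks it, and a majority vote across prompts picks the final object.

```python
import numpy as np
from collections import Counter

def rerank(clip_scores, shape_scores, alpha=0.7):
    """Fuse language and shape cues into one score per candidate.

    alpha is a hypothetical mixing weight between CLIP similarity
    and shape similarity; the paper does not specify this value.
    """
    return alpha * np.asarray(clip_scores) + (1 - alpha) * np.asarray(shape_scores)

def majority_vote(rankings):
    """Return the candidate ranked first most often across prompts."""
    top1 = [int(np.argmax(r)) for r in rankings]
    return Counter(top1).most_common(1)[0][0]

# Three prompt variants scoring four candidate 3D objects (toy numbers).
clip_scores = [[0.8, 0.6, 0.3, 0.1],
               [0.2, 0.7, 0.4, 0.1],
               [0.7, 0.5, 0.2, 0.3]]
shape_scores = [0.9, 0.4, 0.2, 0.1]  # e.g., silhouette IoU per candidate

fused = [rerank(c, shape_scores) for c in clip_scores]
print(majority_vote(fused))  # 0 -- candidate 0 wins 2 of 3 votes
```

The voting step is what makes the pipeline robust to a single ambiguous prompt: even when one prompt favors a different candidate, the shape-consistent candidate still carries the majority.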