🤖 AI Summary
Existing text-to-3D shape retrieval methods assume canonical poses and cover only a limited set of object categories, so they struggle with real-world scenarios involving diverse classes and arbitrarily oriented 3D objects. This work proposes RI-Mamba, the first rotation-invariant state-space model designed for point clouds. By decoupling global and local reference frames, RI-Mamba disentangles pose from geometry. It leverages Hilbert ordering to construct a geometrically structured yet rotation-invariant token sequence and incorporates directional embeddings via feature-wise linear modulation (FiLM) to recover spatial context. This integration of rotation invariance into the Mamba architecture enables cross-modal contrastive learning and large-scale unsupervised training while preserving linear computational complexity. Evaluated on the OmniObject3D benchmark covering over 200 categories, the method significantly improves retrieval performance, robustness, and generalization under arbitrary orientations.
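The summary mentions ordering points along a space-filling curve to turn an unordered point cloud into a geometrically coherent token sequence. The sketch below illustrates the idea with a Morton (Z-order) curve as a simpler stand-in for the Hilbert curve the paper uses; the function names and the 10-bit quantization are assumptions for illustration, and rotation invariance would additionally require expressing coordinates in a rotation-invariant frame before sorting, which this sketch omits.

```python
import numpy as np

def _part1by2(n):
    # Spread the low 10 bits of n so two zero bits separate each bit
    # (standard Morton-code bit interleaving masks).
    n = n & 0x3FF
    n = (n | (n << 16)) & 0x030000FF
    n = (n | (n << 8)) & 0x0300F00F
    n = (n | (n << 4)) & 0x030C30C3
    n = (n | (n << 2)) & 0x09249249
    return n

def morton_order(points, bits=10):
    """Return indices that sort 3D points along a Z-order curve.

    A stand-in for Hilbert ordering: both map nearby grid cells to
    nearby sequence positions, giving the token sequence local
    geometric structure.
    """
    pts = np.asarray(points, dtype=np.float64)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    # Quantize each coordinate to a (2**bits)-cell grid.
    grid = ((pts - mins) / np.maximum(maxs - mins, 1e-9)
            * (2 ** bits - 1)).astype(np.uint32)
    codes = (_part1by2(grid[:, 0])
             | (_part1by2(grid[:, 1]) << 1)
             | (_part1by2(grid[:, 2]) << 2))
    return np.argsort(codes)
```

For points on the main diagonal, `morton_order` recovers the natural near-to-far ordering, since the interleaved code grows monotonically with the coordinates.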
📝 Abstract
3D assets have rapidly expanded in quantity and diversity due to the growing popularity of virtual reality and gaming. As a result, text-to-shape retrieval has become essential for intuitive search within large repositories. However, existing methods require canonical poses and support only a few object categories, limiting their applicability in real-world settings where objects belong to diverse classes and appear in arbitrary orientations. To address this challenge, we propose RI-Mamba, the first rotation-invariant state-space model for point clouds. RI-Mamba defines global and local reference frames to disentangle pose from geometry and uses Hilbert sorting to construct token sequences with meaningful geometric structure while maintaining rotation invariance. We further introduce a novel strategy to compute orientational embeddings and reintegrate them via feature-wise linear modulation, effectively recovering spatial context and enhancing model expressiveness. Our strategy is inherently compatible with state-space models and operates in linear time. To scale up retrieval, we adopt cross-modal contrastive learning with automated triplet generation, allowing training on diverse datasets without manual annotation. Extensive experiments demonstrate RI-Mamba's superior representational capacity and robustness, achieving state-of-the-art performance on the OmniObject3D benchmark across more than 200 object categories under arbitrary orientations. Our code will be made available at https://github.com/ndkhanh360/RI-Mamba.git.
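The abstract describes reintegrating orientational embeddings through feature-wise linear modulation (FiLM), i.e. a conditioning vector produces a per-channel scale and shift applied to the token features. A minimal numpy sketch of that mechanism follows; the array shapes, the linear projections `W_g`/`W_b`, and the name `orient` are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def film(features, gamma, beta):
    # Feature-wise linear modulation: scale and shift each channel
    # of the token features with condition-derived parameters.
    return gamma * features + beta

rng = np.random.default_rng(0)
N, C, D = 8, 16, 4                       # tokens, channels, embedding dim (assumed)
tokens = rng.standard_normal((N, C))     # token features
orient = rng.standard_normal(D)          # hypothetical directional embedding
W_g = rng.standard_normal((D, C))        # hypothetical projection to scales
W_b = rng.standard_normal((D, C))        # hypothetical projection to shifts
gamma, beta = orient @ W_g, orient @ W_b # per-channel scale/shift from the embedding
out = film(tokens, gamma, beta)          # shape (N, C), same as the input tokens
```

Because FiLM is a single elementwise affine transform per channel, it adds only O(N·C) work per layer and so does not disturb the linear-time property the abstract claims for the state-space backbone.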