AI Summary
Existing 3D asset retrieval methods suffer from inconsistent modeling of spatial, semantic, and stylistic constraints and from the absence of a dedicated retrieval paradigm. This paper proposes a trimodal joint retrieval framework tailored for metaverse scene generation that supports arbitrary query combinations of text, image, and 3D inputs. The key contributions are: (1) an equivariant scene layout encoder (ESSGNN) that keeps retrieval results invariant to coordinate frame transformations while enabling context-aware iterative scene construction; and (2) unified trimodal feature fusion with joint learning of object appearance and scene layout, jointly enforcing spatial compatibility and stylistic consistency. Extensive experiments on large-scale 3D asset repositories demonstrate significant improvements in cross-modal retrieval accuracy, layout plausibility, and stylistic coherence. The framework enables efficient, controllable, and semantically grounded scene generation for metaverse applications.
Abstract
We present MetaFind, a scene-aware trimodal compositional retrieval framework designed to enhance scene generation in the metaverse by retrieving 3D assets from large-scale repositories. MetaFind addresses two core challenges: (i) inconsistent asset retrieval that overlooks spatial, semantic, and stylistic constraints, and (ii) the absence of a standardized retrieval paradigm tailored to 3D assets, as existing approaches mainly rely on general-purpose 3D shape representation models. Our key innovation is a flexible retrieval mechanism that supports arbitrary combinations of text, image, and 3D modalities as queries, enhancing spatial reasoning and style consistency by jointly modeling object-level features (including appearance) and scene-level layout structures. Methodologically, MetaFind introduces a plug-and-play equivariant layout encoder, ESSGNN, that captures spatial relationships and object appearance features, ensuring retrieved 3D assets are contextually and stylistically coherent with the existing scene regardless of coordinate frame transformations. The framework supports iterative scene construction by continuously adapting retrieval results to the current scene state. Empirical evaluations demonstrate that MetaFind improves spatial and stylistic consistency across various retrieval tasks compared to baseline methods.
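The abstract does not spell out how ESSGNN achieves insensitivity to coordinate frame transformations, but a common way to obtain this property in a layout graph network is to condition messages only on invariant geometric quantities such as pairwise distances. The sketch below is a hypothetical illustration of that general principle (the layer, MLP, and shapes are our own assumptions, not the paper's architecture): because the layer sees only squared distances between object positions, rigidly rotating and translating the whole scene leaves its output unchanged.

```python
import numpy as np

def mlp(z, W1, b1, W2, b2):
    # two-layer perceptron with ReLU; weights are illustrative, untrained
    return np.maximum(z @ W1 + b1, 0) @ W2 + b2

def invariant_layer(h, x, params):
    """One message-passing step over a scene layout graph.

    h: (n, d) object features; x: (n, 3) object positions.
    Messages depend on positions only through squared pairwise
    distances, so the output is unchanged by any rigid transform of x.
    """
    n = h.shape[0]
    h_new = h.copy()
    for i in range(n):
        msgs = []
        for j in range(n):
            if i == j:
                continue
            d2 = np.sum((x[i] - x[j]) ** 2)  # invariant geometric feature
            z = np.concatenate([h[i], h[j], [d2]])
            msgs.append(mlp(z, *params))
        h_new[i] = h[i] + np.mean(msgs, axis=0)  # residual aggregation
    return h_new

rng = np.random.default_rng(0)
d = 4
params = (rng.normal(size=(2 * d + 1, 8)), np.zeros(8),
          rng.normal(size=(8, d)), np.zeros(d))
h = rng.normal(size=(5, d))
x = rng.normal(size=(5, 3))

# random rigid transform: orthogonal rotation (via QR) plus translation
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
x_moved = x @ Q.T + rng.normal(size=3)

out_orig = invariant_layer(h, x, params)
out_moved = invariant_layer(h, x_moved, params)
print(np.allclose(out_orig, out_moved))  # True: distances are preserved
```

The same idea extends to equivariant variants, where position updates transform consistently with the input frame rather than being discarded; the invariant form shown here is the simpler case and already suffices to make a retrieval score independent of the scene's coordinate system.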