MetaFind: Scene-Aware 3D Asset Retrieval for Coherent Metaverse Scene Generation

📅 2025-10-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing 3D asset retrieval methods model spatial, semantic, and stylistic constraints inconsistently and lack a retrieval paradigm dedicated to 3D assets. This paper proposes a tri-modal joint retrieval framework tailored for metaverse scene generation, supporting arbitrary query combinations of text, image, and 3D inputs. Our key contributions are: (1) an equivariant scene layout encoder (ESSGNN) that keeps retrieval consistent under coordinate frame transformations while enabling context-aware iterative scene construction; and (2) unified tri-modal feature fusion that jointly learns object appearance and scene layout, harmonizing spatial compatibility and stylistic consistency. Extensive experiments on large-scale 3D asset repositories demonstrate improvements in cross-modal retrieval accuracy, layout plausibility, and stylistic coherence. The framework enables efficient, controllable, and semantically grounded scene generation for metaverse applications.
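
To make the retrieval interface concrete, the sketch below shows one way arbitrary combinations of text, image, and 3D query embeddings could be fused and matched against an asset bank. The encoder outputs, the mean-pooling fusion rule, and the function names (`fuse_query`, `retrieve`) are illustrative assumptions, not the fusion module specified by the paper.

```python
# Minimal sketch, assuming pre-computed per-modality embeddings of a shared
# dimension; mean fusion and cosine scoring are placeholders, not MetaFind's
# actual fusion architecture.
import torch
import torch.nn.functional as F

def fuse_query(text_emb=None, image_emb=None, shape_emb=None):
    """Fuse whichever modality embeddings are provided into one query vector."""
    parts = [e for e in (text_emb, image_emb, shape_emb) if e is not None]
    assert parts, "at least one query modality must be given"
    q = torch.stack(parts, dim=0).mean(dim=0)  # simple average over modalities
    return F.normalize(q, dim=-1)

def retrieve(query, asset_bank, k=5):
    """Rank 3D assets by cosine similarity to the fused query embedding."""
    scores = F.normalize(asset_bank, dim=-1) @ query  # (num_assets,)
    return scores.topk(k).indices

# Toy usage: 1,000 assets with 512-d embeddings, queried with text + image.
bank = torch.randn(1000, 512)
text_q, img_q = torch.randn(512), torch.randn(512)
top_ids = retrieve(fuse_query(text_emb=text_q, image_emb=img_q), bank)
```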

πŸ“ Abstract
We present MetaFind, a scene-aware tri-modal compositional retrieval framework designed to enhance scene generation in the metaverse by retrieving 3D assets from large-scale repositories. MetaFind addresses two core challenges: (i) inconsistent asset retrieval that overlooks spatial, semantic, and stylistic constraints, and (ii) the absence of a standardized retrieval paradigm specifically tailored for 3D asset retrieval, as existing approaches mainly rely on general-purpose 3D shape representation models. Our key innovation is a flexible retrieval mechanism that supports arbitrary combinations of text, image, and 3D modalities as queries, enhancing spatial reasoning and style consistency by jointly modeling object-level features (including appearance) and scene-level layout structures. Methodologically, MetaFind introduces a plug-and-play equivariant layout encoder ESSGNN that captures spatial relationships and object appearance features, ensuring retrieved 3D assets are contextually and stylistically coherent with the existing scene, regardless of coordinate frame transformations. The framework supports iterative scene construction by continuously adapting retrieval results to current scene updates. Empirical evaluations demonstrate the improved spatial and stylistic consistency of MetaFind in various retrieval tasks compared to baseline methods.
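
For intuition about what an equivariant layout encoder can look like, the following is a minimal E(3)-equivariant message-passing layer in the spirit of EGNN-style architectures. ESSGNN's actual layer design, feature dimensions, and scene-graph construction are not given here, so everything below is an assumed sketch: messages are built only from invariant quantities (node features and pairwise distances), and position updates move only along relative direction vectors, which preserves equivariance to rotations and translations of the scene's coordinate frame.

```python
# Assumed sketch of one equivariant layout-encoder layer; not ESSGNN's exact design.
import torch
import torch.nn as nn

class EquivariantLayoutLayer(nn.Module):
    def __init__(self, feat_dim, hidden_dim=128):
        super().__init__()
        # Messages use only invariant inputs: endpoint features + squared distance.
        self.msg_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim + 1, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.SiLU())
        self.feat_mlp = nn.Linear(feat_dim + hidden_dim, feat_dim)
        self.coord_mlp = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, h, x, edge_index):
        # h: (N, feat_dim) invariant object features (semantics/appearance)
        # x: (N, 3) object positions; edge_index: (2, E) scene-graph edges
        src, dst = edge_index
        rel = x[src] - x[dst]                     # relative positions (equivariant)
        dist2 = (rel ** 2).sum(-1, keepdim=True)  # squared distances (invariant)
        m = self.msg_mlp(torch.cat([h[src], h[dst], dist2], dim=-1))
        agg = torch.zeros(h.size(0), m.size(-1), device=h.device)
        agg.index_add_(0, dst, m)                 # sum messages per receiving object
        h_new = h + self.feat_mlp(torch.cat([h, agg], dim=-1))
        # Position updates along relative directions keep E(3) equivariance.
        dx = torch.zeros_like(x)
        dx.index_add_(0, dst, rel * self.coord_mlp(m))
        return h_new, x + dx
```

Pooling the updated node features (e.g. a mean over `h_new`) then yields a scene-level layout embedding that does not depend on how the scene's coordinate frame is chosen.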
Problem

Research questions and friction points this paper is trying to address.

Retrieving 3D assets with spatial, semantic, and stylistic consistency
Establishing a standardized retrieval paradigm for 3D assets
Supporting multimodal queries for coherent metaverse scene generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tri-modal retrieval using text, image, and 3D queries
Plug-and-play equivariant layout encoder (ESSGNN) captures spatial relationships
Iterative scene construction adapts retrieval to scene updates (see the sketch below)
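
The sketch below illustrates that iterative loop under stated assumptions: each new object is retrieved with a query conditioned on a context embedding of the layout built so far, then placed and folded back into the scene state. The additive conditioning rule and the callable signatures are illustrative, not MetaFind's documented interface.

```python
# Hedged sketch of context-aware iterative scene construction; all callables
# (encode_layout, score_assets, place_asset) are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def build_scene(prompt_embs, encode_layout, score_assets, place_asset):
    """prompt_embs: list of (d,) query embeddings, one per object to add.
    encode_layout(feats, positions) -> (d,) scene-context embedding.
    score_assets(query) -> (num_assets,) similarity scores.
    place_asset(asset_id, scene) -> ((d,) appearance feature, (3,) position)."""
    feats, positions, scene = [], [], []
    for q in prompt_embs:
        if scene:  # condition retrieval on the layout assembled so far
            ctx = encode_layout(torch.stack(feats), torch.stack(positions))
            q = q + ctx  # assumed additive conditioning
        asset_id = int(score_assets(F.normalize(q, dim=-1)).argmax())
        feat, pos = place_asset(asset_id, scene)
        feats.append(feat)
        positions.append(pos)
        scene.append(asset_id)
    return scene
```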