🤖 AI Summary
Existing partial-observation 3D generation methods rely on dense global attention, resulting in quadratic computational complexity with respect to the number of components—hindering scalable, fine-grained compositional 3D asset synthesis. To address this, we propose a scalable sparse global attention framework: first, components are ranked by importance; then, top-k routing selects salient components, while non-salient ones undergo semantic compression to preserve contextual priors with drastically reduced computation. Furthermore, we introduce a hybrid attention mechanism that jointly models local geometric details and global structural relationships. Experiments demonstrate that our method significantly outperforms existing baselines on compositional 3D object and scene generation, enabling high-fidelity, efficient synthesis for scenes comprising hundreds of components.
📝 Abstract
Compositionality is critical for 3D object and scene generation, but existing part-aware 3D generation methods scale poorly because the cost of global attention grows quadratically with the number of components. In this work, we present MoCA, a compositional 3D generative model with two key designs: (1) importance-based component routing, which selects the top-k most relevant components for sparse global attention, and (2) unimportant-component compression, which preserves the contextual priors of unselected components while reducing the computational complexity of global attention. With these designs, MoCA enables efficient, fine-grained compositional 3D asset creation that scales to large numbers of components. Extensive experiments show MoCA outperforms baselines on both compositional object and scene generation tasks. Project page: https://lizhiqi49.github.io/MoCA
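The routing-plus-compression idea can be sketched in a few lines. The following is a minimal, hypothetical illustration (function and variable names are our own, not MoCA's API): components are ranked by an importance score, the top-k are kept for full global attention, and the remainder are mean-pooled into a single summary vector standing in for "semantic compression" of contextual priors.

```python
# Hypothetical sketch of importance-based top-k component routing with
# compression of unselected components. Names are illustrative, not MoCA's.

def route_components(features, scores, k):
    """Select the top-k components by importance; mean-pool the rest.

    features: list of equal-length feature vectors (one per component)
    scores:   importance score per component
    k:        number of components kept for sparse global attention
    Returns (selected_indices, selected_features, context_summary).
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    selected = sorted(order[:k])   # keep original component order
    rest = order[k:]
    dim = len(features[0])
    if rest:
        # Compress all unselected components into one mean-pooled vector,
        # retaining coarse context at constant cost.
        summary = [sum(features[i][d] for i in rest) / len(rest)
                   for d in range(dim)]
    else:
        summary = [0.0] * dim
    return selected, [features[i] for i in selected], summary
```

Attention over the k selected components plus one summary token costs O((k+1)^2) instead of O(n^2) in the total component count n, which is the source of the claimed scalability.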