🤖 AI Summary
Existing 3D generation methods predominantly target object-level synthesis. Scene-level generation remains constrained by the lack of a scalable latent representation learning framework, chiefly because 3D Gaussian Splatting (3DGS) scene representations are spatially unbounded and exhibit scale inconsistency across scenes.
Method: We propose Can3Tok, the first variational autoencoder (VAE) tailored to scene-level 3D Gaussian primitives. It combines a 3D tokenization scheme with a general data-processing pipeline that resolves cross-scene scale inconsistency, enabling unified latent-space modeling of unstructured, scale-heterogeneous scenes.
Contribution/Results: Can3Tok enables image-to-3DGS and text-to-3DGS generation. On DL3DV-10K, it is the only evaluated method that converges during training and generalizes to novel scenes at inference, whereas compared methods fail to converge on even a few hundred scenes. This work establishes a foundational framework for scalable, generative 3D scene synthesis.
📝 Abstract
3D generation has made significant progress; however, it still largely remains at the object level. Feed-forward scene-level 3D generation has rarely been explored, due to the lack of models capable of scaling up latent representation learning on scene-level 3D data. Unlike object-level generative models, which are trained on well-labeled 3D data in a bounded canonical space, 3D scenes represented by 3D Gaussian Splatting (3DGS) are unbounded and exhibit scale inconsistency across scenes, making unified latent representation learning for generative purposes extremely challenging. In this paper, we introduce Can3Tok, the first scene-level 3D variational autoencoder (VAE) capable of encoding a large number of Gaussian primitives into a low-dimensional latent embedding that effectively captures both the semantic and spatial information of the inputs. Beyond model design, we propose a general pipeline for 3D scene data processing to address the scale inconsistency issue. We validate our method on the recent scene-level 3D dataset DL3DV-10K, where only Can3Tok successfully generalizes to novel 3D scenes, while compared methods fail to converge on even a few hundred scene inputs during training and exhibit no generalization ability at inference. Finally, we showcase image-to-3DGS and text-to-3DGS generation as applications that demonstrate Can3Tok's ability to facilitate downstream generation tasks.
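The scale-inconsistency problem the abstract describes can be made concrete with a minimal sketch: before encoding, each scene's Gaussian primitives can be mapped into a shared canonical space by centering their means and rescaling the scene to a unit extent. This is an illustrative assumption about the preprocessing pipeline, not the paper's actual implementation; the function and field names below are hypothetical.

```python
# Hypothetical sketch of per-scene scale normalization for 3DGS data.
# Scenes with very different physical extents (a desk vs. a street) are
# mapped to a common canonical range so a single VAE can encode them.

def normalize_scene(means, scales):
    """means: list of (x, y, z) Gaussian centers; scales: per-Gaussian sizes."""
    n = len(means)
    # Center the scene at its centroid.
    centroid = tuple(sum(p[i] for p in means) / n for i in range(3))
    centered = [tuple(p[i] - centroid[i] for i in range(3)) for p in means]
    # The largest per-axis offset from the centroid defines the scene extent.
    extent = max(max(abs(c) for c in p) for p in centered) or 1.0
    norm_means = [tuple(c / extent for c in p) for p in centered]
    # Per-Gaussian scales shrink by the same factor to stay consistent.
    norm_scales = [s / extent for s in scales]
    return norm_means, norm_scales

# Example: a scene spanning ~20 units is squeezed into [-1, 1] per axis.
means = [(10.0, 0.0, 0.0), (-10.0, 2.0, 0.0)]
scales = [1.0, 2.0]
norm_means, norm_scales = normalize_scene(means, scales)
```

After this step, every scene occupies the same bounded range regardless of its original metric scale, which is the property the unified latent representation relies on.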