Can3Tok: Canonical 3D Tokenization and Latent Modeling of Scene-Level 3D Gaussians

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D generation methods predominantly target object-level synthesis; scene-level generation remains constrained by the lack of a scalable latent representation learning framework, particularly due to the unbounded spatial extent and cross-scene scale inconsistency of 3D Gaussian Splatting (3DGS) representations. Method: The paper proposes Can3Tok, the first scene-level variational autoencoder (VAE) for large numbers of 3D Gaussian primitives. It combines a canonical 3D tokenization scheme with a general preprocessing pipeline to enable unified latent-space modeling of unstructured, scale-heterogeneous scenes. Contribution/Results: Can3Tok supports image-to-3DGS and text-to-3DGS generation. On DL3DV-10K, it is the only evaluated method that converges during training and generalizes to novel scenes; compared methods fail to converge on even a few hundred scenes. This work establishes a foundation for scalable, generative 3D scene synthesis.

📝 Abstract
3D generation has made significant progress; however, it still largely remains at the object level. Feedforward 3D scene-level generation has been rarely explored due to the lack of models capable of scaling up latent representation learning on 3D scene-level data. Unlike object-level generative models, which are trained on well-labeled 3D data in a bounded canonical space, scene-level generation with scenes represented by 3D Gaussian Splatting (3DGS) must handle unbounded extents and scale inconsistency across different scenes, making unified latent representation learning for generative purposes extremely challenging. In this paper, we introduce Can3Tok, the first 3D scene-level variational autoencoder (VAE) capable of encoding a large number of Gaussian primitives into a low-dimensional latent embedding, which effectively captures both semantic and spatial information of the inputs. Beyond model design, we propose a general pipeline for 3D scene data processing to address the scale inconsistency issue. We validate our method on the recent scene-level 3D dataset DL3DV-10K, where only Can3Tok successfully generalizes to novel 3D scenes, while compared methods fail to converge on even a few hundred scene inputs during training and exhibit zero generalization ability during inference. Finally, we demonstrate image-to-3DGS and text-to-3DGS generation as applications, showing its ability to facilitate downstream generation tasks.
Problem

Research questions and friction points this paper is trying to address.

Addresses scale inconsistency in 3D scene-level generative models
Encodes Gaussian primitives into low-dimensional latent embeddings
Enables feedforward 3D scene generation from images or text
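The core idea of encoding a variable-length set of Gaussian primitives into one low-dimensional latent can be illustrated with a minimal sketch. The paper's actual architecture is not described in this summary, so everything below is a hypothetical stand-in: the primitive feature layout, the tanh MLP, the mean pooling, and all weight names are assumptions, not the authors' design. The sketch only shows the structural requirement that the encoder be permutation-invariant over primitives and output a fixed-size embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-primitive layout for a 3DGS point:
# position (3) + scale (3) + rotation quaternion (4) + opacity (1) + color (3) = 14 dims.
PRIM_DIM = 14
LATENT_DIM = 8  # assumed latent size, for illustration only

def encode_scene(primitives, W_feat, W_latent):
    """Toy set encoder: per-primitive feature map, then mean-pool, then project.

    Mean pooling over the primitive axis makes the embedding invariant to
    the (arbitrary) ordering of Gaussians in the scene, and lets scenes with
    different primitive counts map to the same fixed-size latent.
    """
    feats = np.tanh(primitives @ W_feat)   # (N, H) per-primitive features
    pooled = feats.mean(axis=0)            # (H,) order-invariant scene summary
    return pooled @ W_latent               # (LATENT_DIM,) latent embedding

N, H = 1000, 32
W_feat = rng.normal(size=(PRIM_DIM, H)) * 0.1
W_latent = rng.normal(size=(H, LATENT_DIM)) * 0.1
scene = rng.normal(size=(N, PRIM_DIM))    # stand-in for a real 3DGS scene

z = encode_scene(scene, W_feat, W_latent)
print(z.shape)  # (8,)
```

A real VAE would additionally predict a variance alongside this mean and train with a reconstruction plus KL objective; the sketch omits that to keep the set-to-vector mapping visible.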
Innovation

Methods, ideas, or system contributions that make the work stand out.

First 3D scene-level VAE for Gaussian primitives
Pipeline for scale-inconsistent 3D scene processing
Latent embedding captures semantic and spatial info
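The scale-inconsistency pipeline above is not specified in detail in this summary, but the usual way to canonicalize 3DGS scenes of wildly different extents is to re-center and rescale each scene into a common bounded frame, applying the same factor to the Gaussians' own extents so geometry is preserved. The sketch below is an assumed illustration of that idea (function name, unit-sphere target, and return convention are all hypothetical), not the paper's actual procedure.

```python
import numpy as np

def canonicalize_scene(positions, scales):
    """Normalize one scene into a canonical unit-radius frame.

    positions: (N, 3) Gaussian centers; scales: (N, 3) per-axis Gaussian extents.
    Centers the scene at the origin and rescales so all centers fit inside a
    unit sphere; Gaussian extents shrink by the same factor, so relative
    geometry is unchanged. Returns the transform so it can be inverted later.
    """
    center = positions.mean(axis=0)
    centered = positions - center
    radius = np.linalg.norm(centered, axis=1).max()
    factor = 1.0 / radius if radius > 0 else 1.0
    return centered * factor, scales * factor, (center, factor)

# Example: a scene spanning tens of meters maps into the unit sphere.
rng = np.random.default_rng(1)
pos = rng.normal(size=(500, 3)) * 20.0
scl = np.abs(rng.normal(size=(500, 3))) * 0.5
pos_c, scl_c, (center, factor) = canonicalize_scene(pos, scl)
```

Applying this per scene before tokenization puts every scene, regardless of capture scale, into the same bounded canonical space that object-level generators already enjoy.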