Omni123: Exploring 3D Native Foundation Models with Limited 3D Data by Unifying Text to 2D and 3D Generation

📅 2026-04-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving geometrically consistent native 3D generation under data scarcity, where existing approaches rely on indirect 2D optimization pipelines. The authors propose Omni123, the first autoregressive, 3D-native foundation model that unifies text-to-2D and text-to-3D generation within a shared sequence space by representing text, images, and 3D shapes as discrete tokens. Omni123 introduces cross-modal consistency as an implicit structural constraint and employs an interleaved X-to-X training paradigm to harmonize multimodal tasks without requiring complete (text, image, 3D) triplets. Through a semantic–visual–geometric cycle reasoning mechanism, the model substantially enhances geometric coherence and appearance fidelity in text-guided 3D synthesis, establishing a new foundation for scalable multimodal 3D world models.
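The shared-sequence idea can be made concrete with a small sketch. The code below is not from the paper; the tokenizer sizes, special tokens, and ID-offset scheme are assumptions, but they illustrate one plausible way text, image, and 3D shape tokens can share a single autoregressive vocabulary so that incomplete pairs (text-image or text-3D) can be serialized without requiring full (text, image, 3D) triplets.

```python
# Hedged sketch (not the authors' code): one shared vocabulary for three modalities,
# built by offsetting each modality's token IDs. All sizes below are assumptions.

TEXT_VOCAB = 32_000   # assumed text tokenizer size
IMG_VOCAB = 8_192     # assumed VQ image codebook size
SHAPE_VOCAB = 8_192   # assumed 3D shape codebook size

# Hypothetical control tokens marking modality boundaries.
BOS, BOI, EOI, BOS3D, EOS3D = range(5)
N_SPECIAL = 5

TEXT_OFFSET = N_SPECIAL
IMG_OFFSET = TEXT_OFFSET + TEXT_VOCAB
SHAPE_OFFSET = IMG_OFFSET + IMG_VOCAB
UNIFIED_VOCAB = SHAPE_OFFSET + SHAPE_VOCAB

def to_unified(text_ids=None, image_ids=None, shape_ids=None):
    """Serialize whichever modalities a training pair provides into one token
    sequence over the shared vocabulary; missing modalities are simply skipped,
    so complete (text, image, 3D) triplets are not required."""
    seq = [BOS]
    if text_ids is not None:
        seq += [t + TEXT_OFFSET for t in text_ids]
    if image_ids is not None:
        seq += [BOI] + [i + IMG_OFFSET for i in image_ids] + [EOI]
    if shape_ids is not None:
        seq += [BOS3D] + [s + SHAPE_OFFSET for s in shape_ids] + [EOS3D]
    return seq

# Example: a text-image pair and a text-3D pair land in the same sequence space.
seq_t2i = to_unified(text_ids=[17, 402], image_ids=[5, 99, 1023])
seq_t23d = to_unified(text_ids=[17, 402], shape_ids=[7, 7, 4096])
```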
📝 Abstract
Recent multimodal large language models have achieved strong performance in unified text and image understanding and generation, yet extending such native capability to 3D remains challenging due to limited data. Compared to abundant 2D imagery, high-quality 3D assets are scarce, making 3D synthesis under-constrained. Existing methods often rely on indirect pipelines that edit in 2D and lift results into 3D via optimization, sacrificing geometric consistency. We present Omni123, a 3D-native foundation model that unifies text-to-2D and text-to-3D generation within a single autoregressive framework. Our key insight is that cross-modal consistency between images and 3D can serve as an implicit structural constraint. By representing text, images, and 3D as discrete tokens in a shared sequence space, the model leverages abundant 2D data as a geometric prior to improve 3D representations. We introduce an interleaved X-to-X training paradigm that coordinates diverse cross-modal tasks over heterogeneous paired datasets without requiring fully aligned text-image-3D triplets. By traversing semantic-visual-geometric cycles (e.g., text to image to 3D to image) within autoregressive sequences, the model jointly enforces semantic alignment, appearance fidelity, and multi-view geometric consistency. Experiments show that Omni123 significantly improves text-guided 3D generation and editing, demonstrating a scalable path toward multimodal 3D world models.
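To make the interleaved X-to-X paradigm more tangible, the sketch below (not the authors' implementation) samples a modality cycle such as text → image → 3D → image, flattens whichever modalities a heterogeneous training example actually provides into one token sequence, and trains with standard next-token cross-entropy. The cycle list, the `serialize` helper, and the model interface are hypothetical stand-ins.

```python
# Hedged sketch of interleaved X-to-X training over heterogeneous paired data.
# Each example is a cycle of modalities flattened into one autoregressive
# sequence and trained with next-token prediction.
import random
import torch
import torch.nn.functional as F

CYCLES = [
    ["text", "image"],                   # abundant 2D pairs act as a geometric prior
    ["text", "shape"],                   # scarce text-3D pairs
    ["image", "shape", "image"],         # visual-geometric round trip
    ["text", "image", "shape", "image"], # semantic-visual-geometric cycle
]

def build_sequence(example, cycle, serialize):
    """Concatenate token chunks for the modalities this example provides,
    in the order given by the sampled cycle."""
    seq = []
    for modality in cycle:
        if example.get(modality) is not None:
            seq += serialize(modality, example[modality])
    return torch.tensor(seq)

def train_step(model, example, serialize, optimizer):
    # Keep only cycles whose modalities this example actually has
    # (we assume every example supports at least one cycle).
    valid = [c for c in CYCLES if all(example.get(m) is not None for m in c)]
    cycle = random.choice(valid)
    tokens = build_sequence(example, cycle, serialize).unsqueeze(0)
    logits = model(tokens[:, :-1])  # assumed: token IDs in, per-position logits out
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tokens[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, the 2D-heavy cycles and the 3D-bearing cycles share one set of weights and one objective, which is how abundant image data could regularize the scarcer 3D token distributions.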
Problem

Research questions and friction points this paper is trying to address.

3D generation
limited 3D data
geometric consistency
multimodal foundation model
text-to-3D
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D-native foundation model
unified text-to-2D/3D generation
cross-modal consistency
autoregressive sequence modeling
geometric prior from 2D data