UniUGG: Unified 3D Understanding and Generation via Geometric-Semantic Encoding

📅 2025-08-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Prior work has yet to achieve unified modeling of 3D understanding and generation. This paper introduces the first end-to-end unified framework that aligns text, images, and 3D representations via joint geometric-semantic encoding. Methodologically, we design a spatial decoder built on a latent diffusion model and propose a geometric-semantic pretraining strategy for the vision encoder; integrated with a large language model, these components support reference-image-guided 3D scene generation and spatial visual question answering. The core contribution is a holistic architecture that simultaneously strengthens 3D understanding and generation, improving the joint perception of spatial structure and semantic content. Experiments demonstrate substantial improvements over state-of-the-art methods in 3D representation learning, generation fidelity, and spatial reasoning accuracy.

📝 Abstract
Despite the impressive progress of recent unified architectures in understanding and generating images, the integration of 3D tasks remains challenging and largely unexplored. In this paper, we introduce UniUGG, the first unified understanding and generation framework for 3D modalities. Our framework employs an LLM to comprehend and decode sentences and 3D representations. At its core, we propose a spatial decoder that leverages a latent diffusion model to generate high-quality 3D representations. This allows the model to generate and imagine 3D scenes from a reference image and an arbitrary view transformation, while retaining support for spatial visual question answering (VQA) tasks. Additionally, we propose a geometric-semantic learning strategy to pretrain the vision encoder. This design jointly captures the input's semantic and geometric cues, enhancing both spatial understanding and generation. Extensive experimental results demonstrate the superiority of our method in visual representation, spatial understanding, and 3D generation. The source code will be released upon paper acceptance.
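To make the architecture described above concrete, here is a minimal PyTorch-style sketch of how a geometric-semantic vision encoder, an LLM backbone, and a latent-diffusion spatial decoder might be wired together for view-conditioned 3D latent generation. All module names, dimensions, and interfaces are hypothetical illustrations inferred from the abstract, not the paper's actual implementation.

```python
# Hypothetical sketch of a UniUGG-style pipeline: a vision encoder feeds an
# LLM backbone, whose fused features condition a latent-diffusion spatial
# decoder. Sizes, modules, and interfaces are illustrative assumptions.
import torch
import torch.nn as nn

class GeometricSemanticEncoder(nn.Module):
    """Vision encoder assumed to emit tokens carrying semantic and geometric cues."""
    def __init__(self, dim=768):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.blocks = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=4,
        )

    def forward(self, image):  # image: (B, 3, H, W)
        tokens = self.patch_embed(image).flatten(2).transpose(1, 2)  # (B, N, dim)
        return self.blocks(tokens)

class SpatialDecoder(nn.Module):
    """Latent-diffusion-style decoder: one denoising step over a 3D latent."""
    def __init__(self, dim=768):
        super().__init__()
        self.denoiser = nn.Sequential(
            nn.Linear(dim * 2 + 1, 1024), nn.GELU(), nn.Linear(1024, dim)
        )

    def forward(self, noisy_latent, cond, t):  # predicts the noise residual
        t_emb = t.view(-1, 1, 1).expand(-1, noisy_latent.size(1), 1)
        return self.denoiser(torch.cat([noisy_latent, cond, t_emb], dim=-1))

class UnifiedModel(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.encoder = GeometricSemanticEncoder(dim)
        self.llm = nn.TransformerEncoder(  # stand-in for a real LLM backbone
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.decoder = SpatialDecoder(dim)
        self.view_embed = nn.Linear(12, dim)  # flattened 3x4 camera transform

    def forward(self, image, view_transform, noisy_latent, t):
        vis = self.encoder(image)                            # (B, N, dim)
        view = self.view_embed(view_transform).unsqueeze(1)  # (B, 1, dim)
        cond = self.llm(torch.cat([vis, view], dim=1))       # fused condition
        cond = cond.mean(dim=1, keepdim=True).expand_as(noisy_latent)
        return self.decoder(noisy_latent, cond, t)

model = UnifiedModel()
image = torch.randn(2, 3, 256, 256)           # reference image
view = torch.randn(2, 12)                     # arbitrary view transformation
latent = torch.randn(2, 64, 768)              # noisy 3D latent tokens
noise_pred = model(image, view, latent, torch.rand(2))  # (2, 64, 768)
```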
Problem

Research questions and friction points this paper is trying to address.

Unifying 3D understanding and generation tasks
Integrating geometric and semantic cues for 3D representation
Generating 3D scenes from reference images and transformations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework using LLM for 3D understanding
Latent diffusion model for 3D representation generation
Geometric-semantic learning strategy for pretraining the vision encoder (see the sketch after this list)
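A rough illustration of the geometric-semantic pretraining idea: the sketch below combines a semantic feature-distillation term (against a frozen semantic teacher) with a per-patch depth-regression term into one objective. The teacher features, depth targets, and loss weights are illustrative assumptions; the paper's actual objective may differ.

```python
# Hypothetical joint geometric-semantic pretraining loss: align encoder
# tokens with a frozen semantic teacher while regressing per-patch depth
# from the same tokens. Targets and weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def geometric_semantic_loss(tokens, teacher_feats, depth_target, depth_head,
                            w_sem=1.0, w_geo=1.0):
    """tokens: (B, N, D) encoder output; teacher_feats: (B, N, D) frozen
    semantic-teacher features; depth_target: (B, N) per-patch depth."""
    # Semantic branch: cosine distillation toward the teacher's features.
    sem_loss = 1.0 - F.cosine_similarity(tokens, teacher_feats, dim=-1).mean()
    # Geometric branch: L1 regression of per-patch depth from the same tokens.
    depth_pred = depth_head(tokens).squeeze(-1)  # (B, N)
    geo_loss = F.l1_loss(depth_pred, depth_target)
    return w_sem * sem_loss + w_geo * geo_loss

# Toy usage with random tensors in place of real encoder outputs and labels.
tokens = torch.randn(2, 256, 768, requires_grad=True)
teacher_feats = torch.randn(2, 256, 768)
depth_target = torch.rand(2, 256)
depth_head = torch.nn.Linear(768, 1)
loss = geometric_semantic_loss(tokens, teacher_feats, depth_target, depth_head)
loss.backward()
```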
👥 Authors
Yueming Xu (Fudan University)
Jiahui Zhang (Fudan University)
Ze Huang (Fudan University)
Yurui Chen (Fudan University)
Yanpeng Zhou (Huawei Noah's Ark Lab)
Zhenyu Chen (Huawei Noah's Ark Lab)
Yu-Jie Yuan (Institute of Computing Technology, Chinese Academy of Sciences)
Pengxiang Xia (Huawei Noah's Ark Lab)
Guowei Huang (Huawei Noah's Ark Lab)
Xinyue Cai (Huawei Noah's Ark Lab)
Zhongang Qi (Huawei Noah's Ark Lab)
Xingyue Quan (Huawei Noah's Ark Lab)
Jianye Hao (Huawei Noah's Ark Lab / Tianjin University)
Hang Xu (Huawei Noah's Ark Lab)
Li Zhang (Fudan University)