DreamLifting: A Plug-in Module Lifting MV Diffusion Models for 3D Asset Generation

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the decoupling of geometry and material modeling in end-to-end generation of physically based rendering (PBR)-ready 3D assets. We propose the Lightweight Gaussian Asset Adapter (LGAA), which unifies geometry and material synthesis by treating a multi-view diffusion model as a shared prior. LGAA reuses pretrained knowledge through a plug-and-play wrapper, aligns multiple priors via a learnable Switcher module, and employs a tamed-VAE decoder coupled with 2D Gaussian Splatting rendering to efficiently produce relightable triangle meshes. Trained on only 69K multi-view samples, LGAA converges rapidly and generates high-fidelity PBR meshes under both text- and image-conditioned guidance. The method significantly reduces data dependency while demonstrating strong generalization and modular plug-and-play capability, enabling seamless integration into existing generative pipelines.

📝 Abstract
The labor- and experience-intensive creation of 3D assets with physically based rendering (PBR) materials demands an autonomous 3D asset creation pipeline. However, most existing 3D generation methods focus on geometry modeling, either baking textures into simple vertex colors or leaving texture synthesis to post-processing with image diffusion models. To achieve end-to-end PBR-ready 3D asset generation, we present the Lightweight Gaussian Asset Adapter (LGAA), a novel framework that unifies the modeling of geometry and PBR materials by exploiting multi-view (MV) diffusion priors from a novel perspective. The LGAA features a modular design with three components. Specifically, the LGAA Wrapper reuses and adapts network layers from MV diffusion models, which encapsulate knowledge acquired from billions of images, enabling better convergence in a data-efficient manner. To incorporate multiple diffusion priors for geometry and PBR synthesis, the LGAA Switcher aligns multiple LGAA Wrapper layers encapsulating different knowledge. Then, a tamed variational autoencoder (VAE), termed LGAA Decoder, is designed to predict 2D Gaussian Splatting (2DGS) with PBR channels. Finally, we introduce a dedicated post-processing procedure to effectively extract high-quality, relightable mesh assets from the resulting 2DGS. Extensive quantitative and qualitative experiments demonstrate the superior performance of LGAA with both text- and image-conditioned MV diffusion models. Additionally, the modular design enables flexible incorporation of multiple diffusion priors, and the knowledge-preserving scheme leads to efficient convergence when trained on merely 69K multi-view instances. Our code, pre-trained weights, and the dataset used will be publicly available via our project page: https://zx-yin.github.io/dreamlifting/.
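To make the "2DGS with PBR channels" idea concrete: each 2D Gaussian splat carries the usual geometric parameters plus per-splat PBR attributes (base color, roughness, metallic) that the LGAA Decoder predicts. The sketch below is purely illustrative; the field names, shapes, and the `random_pbr_splats` helper are assumptions for exposition, not the paper's actual schema.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical field layout for 2D Gaussian splats augmented with PBR
# channels; names and shapes are illustrative, not the paper's schema.
@dataclass
class PBRGaussians2D:
    centers: np.ndarray    # (N, 3) splat centers in world space
    tangents: np.ndarray   # (N, 2, 3) two axes spanning each oriented 2D disk
    scales: np.ndarray     # (N, 2) extent along each tangent axis
    opacity: np.ndarray    # (N,) alpha used during splatting
    albedo: np.ndarray     # (N, 3) PBR base color
    roughness: np.ndarray  # (N,) PBR roughness
    metallic: np.ndarray   # (N,) PBR metallic

    def num_splats(self) -> int:
        return self.centers.shape[0]

def random_pbr_splats(n: int, seed: int = 0) -> PBRGaussians2D:
    """Build a randomly initialized set of n splats (for testing only)."""
    rng = np.random.default_rng(seed)
    return PBRGaussians2D(
        centers=rng.normal(size=(n, 3)),
        tangents=rng.normal(size=(n, 2, 3)),
        scales=rng.uniform(0.01, 0.1, size=(n, 2)),
        opacity=rng.uniform(size=n),
        albedo=rng.uniform(size=(n, 3)),
        roughness=rng.uniform(size=n),
        metallic=rng.uniform(size=n),
    )
```

Because the PBR attributes live on the splats themselves, the post-processing step described in the abstract can bake them into mesh textures when extracting the relightable asset.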
Problem

Research questions and friction points this paper is trying to address.

Generating end-to-end PBR-ready 3D assets autonomously
Unifying geometry and PBR material modeling using diffusion priors
Enabling efficient convergence with minimal multi-view training data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular framework unifies geometry and PBR materials
Reuses diffusion model layers for efficient convergence
Decodes 2D Gaussian Splatting with PBR channels
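The Switcher's role of aligning multiple wrapper branches (e.g., a geometry prior and a PBR prior) can be pictured as a learnable gate that blends their features. The snippet below is a minimal sketch under that assumption; the softmax-gating form and the `switcher_blend` name are illustrative, not taken from the paper.

```python
import numpy as np

def switcher_blend(branch_features, logits):
    """Blend features from several prior branches with a softmax gate.

    branch_features: list of arrays with identical shape, one per branch
                     (e.g., geometry prior vs. PBR prior activations).
    logits: 1-D array of learnable gate scores, one per branch.
    """
    logits = np.asarray(logits, dtype=np.float64)
    w = np.exp(logits - logits.max())  # numerically stable softmax
    w = w / w.sum()
    # Weighted sum of branch features, weighted by the gate.
    return sum(wi * f for wi, f in zip(w, branch_features))
```

With equal logits the gate averages the branches; as training pushes one logit up, that branch's knowledge dominates the blended feature.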
Ze-Xin Yin
Nankai University
3D computer vision, Neural Radiance Fields
Jiaxiong Qiu
Unknown affiliation
3D reconstruction, neural rendering, deep learning
Liu Liu
Horizon Robotics
Xinjie Wang
Horizon Robotics
Wei Sui
Horizon Robotics
3D Vision, BEV Perception, 3D Reconstruction
Zhizhong Su
Horizon Robotics
Deep Learning, Computer Vision, Autonomous Driving, Robotics Learning
Jian Yang
PCA Lab, VCIP, College of Computer Science, Nankai University
Jin Xie
School of Intelligence Science and Technology, Nanjing University, Suzhou, China