🤖 AI Summary
Existing 3D-native diffusion models struggle with geometric distortion, cross-view texture inconsistency, limited controllability, and scarce training data when generating high-fidelity, PBR-textured 3D meshes from a single image. To address these challenges, the authors propose MeshGen: a framework featuring a render-enhanced point-to-shape auto-encoder with ray-based regularization for joint geometry–rendering optimization; geometric and generative rendering augmentation for controllability and generalization; and a reference-attention-driven multi-view ControlNet coupled with a PBR decomposer and a UV inpainter that completes textures in occluded regions. MeshGen significantly outperforms state-of-the-art methods in geometric accuracy, PBR texture consistency, cross-view coherence, and controllability, enabling high-quality 3D asset generation.
📝 Abstract
In this paper, we introduce MeshGen, an advanced image-to-3D pipeline that generates high-quality 3D meshes with detailed geometry and physically based rendering (PBR) textures. Existing 3D-native diffusion models face several challenges, including suboptimal auto-encoder performance, limited controllability, poor generalization, and inconsistent image-based PBR texturing; MeshGen employs several key innovations to overcome these limitations. We pioneer a render-enhanced point-to-shape auto-encoder that compresses meshes into a compact latent space through perceptual optimization with ray-based regularization. This ensures that 3D shapes are accurately represented and reconstructed, preserving geometric details within the latent space. To address data scarcity and image-shape misalignment, we further propose geometric augmentation and generative rendering augmentation techniques, which enhance the model's controllability and generalization, allowing it to perform well even with limited public datasets. For texture generation, MeshGen employs a reference-attention-based multi-view ControlNet for consistent appearance synthesis. This is complemented by our multi-view PBR decomposer, which estimates PBR components, and a UV inpainter, which fills invisible areas, ensuring a seamless and consistent texture across the 3D mesh. Our extensive experiments demonstrate that MeshGen substantially outperforms previous methods in both shape and texture generation, setting a new standard for the quality of 3D meshes generated with PBR textures. Code: https://github.com/heheyas/MeshGen; project page: https://heheyas.github.io/MeshGen
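The render-enhanced auto-encoder described above combines a point-cloud reconstruction objective with a ray-based regularization term. The sketch below is a hypothetical illustration of that idea, not the authors' implementation: the function names, the Chamfer-distance reconstruction term, the squared-error ray term, and the weighting `lam` are all assumptions made for exposition.

```python
# Hypothetical sketch of a reconstruction loss combined with a
# ray-based regularizer, in the spirit of MeshGen's render-enhanced
# auto-encoder. All names and the loss composition are illustrative.
import numpy as np

def chamfer_distance(a, b):
    # Symmetric nearest-neighbour distance between two point sets
    # of shapes (N, 3) and (M, 3).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def ray_regularization(pred_occ, gt_occ):
    # Penalize disagreement of predicted vs. ground-truth occupancy
    # sampled along rays; arrays have shape (num_rays, samples_per_ray).
    return np.mean((pred_occ - gt_occ) ** 2)

def autoencoder_loss(pred_pts, gt_pts, pred_occ, gt_occ, lam=0.1):
    # Total loss: geometric reconstruction + weighted ray-based term.
    return chamfer_distance(pred_pts, gt_pts) + lam * ray_regularization(pred_occ, gt_occ)

rng = np.random.default_rng(0)
pts = rng.normal(size=(128, 3))       # dummy surface points
occ = rng.random((16, 32))            # dummy per-ray occupancy samples
loss = autoencoder_loss(pts, pts, occ, occ)
print(loss)  # 0.0 for a perfect reconstruction
```

The ray term supplements the purely geometric Chamfer loss with supervision along viewing rays, which is one plausible way to realize the "joint geometry–rendering optimization" the abstract describes.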