UniTEX: Universal High Fidelity Generative Texturing for 3D Shapes

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods rely on UV mapping to reproject generated multi-view images onto 3D shapes and then inpaint the texture, and they suffer from topological ambiguity that induces geometric inconsistency and texture distortion. This paper proposes a two-stage 3D texture generation framework that bypasses explicit UV parameterization and instead models texture directly in a unified 3D functional space. Key contributions include: (1) Texture Functions (TFs), a continuous volumetric representation that maps any 3D point to a texture value based on surface proximity; (2) the Large Texturing Model (LTM), a scalable transformer-based model that predicts TFs from image and geometry inputs; and (3) a LoRA-adapted Diffusion Transformer (DiT) architecture for high-quality multi-view texture synthesis. Experiments demonstrate state-of-the-art performance in visual fidelity, geometric consistency, and cross-shape generalization. The framework enables fully automatic, high-quality, and scalable 3D texture synthesis without manual UV unwrapping or mesh editing.
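The Texture Function idea above can be sketched in a few lines. This is a hypothetical minimal illustration, not the paper's implementation: it approximates "texture value based on surface proximity" by a brute-force nearest-surface-sample lookup over points sampled from the mesh, which is topology-independent in the same spirit (the paper instead learns this field with a transformer).

```python
import numpy as np

def make_texture_function(surface_points, surface_colors):
    """Hypothetical TF sketch: a continuous volumetric field assigning any
    3D point the color of its nearest surface sample, with no UV mapping.

    surface_points: (N, 3) points sampled on the mesh surface
    surface_colors: (N, 3) RGB values at those points
    """
    def tf(query):
        # query: (M, 3) arbitrary 3D points, on or off the surface.
        # Brute-force pairwise distances (M, N); a k-d tree would scale better.
        d = np.linalg.norm(query[:, None, :] - surface_points[None, :, :], axis=-1)
        # Texture value depends only on which surface sample is closest,
        # i.e. on surface proximity, never on mesh connectivity or UVs.
        return surface_colors[d.argmin(axis=1)]
    return tf
```

Because the field is defined everywhere in the volume, it can be queried at any resolution or on a re-meshed version of the shape without re-parameterization.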

📝 Abstract
We present UniTEX, a novel two-stage 3D texture generation framework that creates high-quality, consistent textures for 3D assets. Existing approaches predominantly rely on UV-based inpainting to refine textures after reprojecting generated multi-view images onto 3D shapes, which introduces challenges related to topological ambiguity. To address this, we bypass the limitations of UV mapping by operating directly in a unified 3D functional space. Specifically, we first lift texture generation into 3D space via Texture Functions (TFs)--a continuous, volumetric representation that maps any 3D point to a texture value based solely on surface proximity, independent of mesh topology. We then predict these TFs directly from image and geometry inputs using a transformer-based Large Texturing Model (LTM). To further enhance texture quality and leverage powerful 2D priors, we develop an advanced LoRA-based strategy for efficiently adapting large-scale Diffusion Transformers (DiTs) to high-quality multi-view texture synthesis as our first stage. Extensive experiments demonstrate that UniTEX achieves superior visual quality and texture integrity compared to existing approaches, offering a generalizable and scalable solution for automated 3D texture generation. Code will be available at: https://github.com/YixunLiang/UniTEX.
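The LoRA-based adaptation strategy mentioned in the abstract rests on a standard mechanism: freeze a pretrained weight matrix and learn only a low-rank additive update. The sketch below shows that mechanism in generic form; the class name, rank, and scaling are illustrative assumptions, not UniTEX's actual configuration.

```python
import numpy as np

class LoRALinear:
    """Generic LoRA sketch: a frozen linear layer plus a trainable
    low-rank update, the kind of adapter used to fine-tune large DiTs
    at a fraction of the full parameter count."""

    def __init__(self, weight, rank=4, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        out_dim, in_dim = weight.shape
        self.weight = weight                            # frozen pretrained W
        self.A = rng.normal(0.0, 0.01, (rank, in_dim))  # trainable down-projection
        self.B = np.zeros((out_dim, rank))              # trainable up-projection, zero init
        self.scale = alpha / rank

    def __call__(self, x):
        # y = x W^T + (alpha / r) * x A^T B^T
        # With B initialized to zero, the adapter starts as the identity
        # perturbation, so fine-tuning begins exactly at the pretrained model.
        return x @ self.weight.T + self.scale * (x @ self.A.T) @ self.B.T
```

Only `A` and `B` would be trained, which is why such adapters make it practical to specialize a large pretrained DiT for multi-view texture synthesis.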
Problem

Research questions and friction points this paper is trying to address.

How to generate high-quality 3D textures without the topological ambiguity of UV mapping
How to represent texture as a continuous volumetric function rather than a 2D parameterization
How to leverage powerful 2D diffusion priors for consistent multi-view texture synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates textures in 3D functional space
Uses transformer-based Large Texturing Model
Adapts Diffusion Transformers for texture synthesis