LaFiTe: A Generative Latent Field for 3D Native Texturing

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing 3D texture generation methods face fundamental limitations in fidelity and generalization: seam artifacts from UV parameterization, view dependency inherent in multi-view projection, and quality degradation in native approaches due to insufficiently expressive latent representations. To address these challenges, we propose the first generative sparse latent color field framework natively designed for 3D texture synthesis. Our method decouples texture appearance from mesh topology by learning a structured sparse latent space via a VAE, and employs a conditional rectified-flow model to enable high-fidelity, continuous color-field decoding and controllable generation. Experiments demonstrate that our approach achieves reconstruction PSNR improvements exceeding 10 dB over state-of-the-art methods, significantly enhances seamlessness and cross-shape and cross-style generalization, and supports downstream tasks including material composition and texture super-resolution.

📝 Abstract
Generating high-fidelity, seamless textures directly on 3D surfaces, what we term 3D-native texturing, remains a fundamental open challenge, with the potential to overcome long-standing limitations of UV-based and multi-view projection methods. However, existing native approaches are constrained by the absence of a powerful and versatile latent representation, which severely limits the fidelity and generality of their generated textures. We identify this representation gap as the principal barrier to further progress. We introduce LaFiTe, a framework that addresses this challenge by learning to generate textures as a 3D generative sparse latent color field. At its core, LaFiTe employs a variational autoencoder (VAE) to encode complex surface appearance into a sparse, structured latent space, which is subsequently decoded into a continuous color field. This representation achieves unprecedented fidelity, exceeding state-of-the-art methods by >10 dB PSNR in reconstruction, by effectively disentangling texture appearance from mesh topology and UV parameterization. Building upon this strong representation, a conditional rectified-flow model synthesizes high-quality, coherent textures across diverse styles and geometries. Extensive experiments demonstrate that LaFiTe not only sets a new benchmark for 3D-native texturing but also enables flexible downstream applications such as material synthesis and texture super-resolution, paving the way for the next generation of 3D content creation workflows.
Problem

Research questions and friction points this paper is trying to address.

Generating high-fidelity, seamless textures directly on 3D surfaces (3D-native texturing)
Overcoming the seam artifacts and view dependency of UV-based and multi-view projection methods
Learning a latent representation expressive enough to disentangle texture appearance from mesh geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a variational autoencoder (VAE) to encode surface appearance into a sparse, structured latent color field
Decodes the latent space into a continuous color field representation
Employs conditional rectified-flow model for texture synthesis
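The paper's conditional rectified-flow sampler is not publicly specified here, but the idea behind rectified flow is straightforward: transport noise to data along near-straight paths by integrating a learned velocity field over time. As a minimal, hedged sketch, the NumPy snippet below Euler-integrates such an ODE; the closed-form `velocity_fn` and the `target` latent are hypothetical stand-ins for the paper's learned conditional network and its sparse latent colors.

```python
import numpy as np

def sample_rectified_flow(velocity_fn, x0, num_steps=100):
    """Euler-integrate dx/dt = v(x, t) from t=0 (noise) to t=1 (data)."""
    x = x0.copy()
    dt = 1.0 / num_steps
    for k in range(num_steps):
        x = x + dt * velocity_fn(x, k * dt)
    return x

# Toy stand-in for the learned conditional velocity network: along a
# straight (rectified) path x_t = (1 - t) * x0 + t * x1, the velocity is
# (x1 - x_t) / (1 - t). Here x1 is a fixed "texture latent" target.
target = np.full((4, 3), 0.5)  # hypothetical: 4 sparse voxels, 3 latent channels
velocity_fn = lambda x, t: (target - x) / (1.0 - t)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 3))  # Gaussian noise at t=0
x1 = sample_rectified_flow(velocity_fn, x0)
```

With this idealized straight-path velocity, the Euler sampler lands exactly on the target; in the actual method the velocity comes from a trained network conditioned on geometry and style, and the decoded output is a continuous color field rather than a fixed latent.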