TEXTRIX: Latent Attribute Grid for Native Texture Generation and Beyond

πŸ“… 2025-12-02
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing 3D texture generation methods suffer from inter-view inconsistency and incomplete coverage of complex surfaces due to multi-view fusion, compromising texture fidelity and geometric integrity. To address this, we propose TEXTRIXβ€”a novel framework that introduces the first implicit 3D attribute grid, unifying texture synthesis and semantic segmentation directly in native voxel space and thereby eliminating the need for explicit view alignment. We further design a sparse-attention diffusion Transformer that jointly optimizes high-resolution texture generation and part-level semantic prediction on this grid. Our approach simultaneously enhances texture seamlessness, geometric consistency, and segmentation boundary accuracy. Extensive experiments demonstrate state-of-the-art performance on both 3D texture generation and 3D semantic segmentation benchmarks.

πŸ“ Abstract
Prevailing 3D texture generation methods, which often rely on multi-view fusion, are frequently hindered by inter-view inconsistencies and incomplete coverage of complex surfaces, limiting the fidelity and completeness of the generated content. To overcome these challenges, we introduce TEXTRIX, a native 3D attribute generation framework for high-fidelity texture synthesis and downstream applications such as precise 3D part segmentation. Our approach constructs a latent 3D attribute grid and leverages a Diffusion Transformer equipped with sparse attention, enabling direct coloring of 3D models in volumetric space and fundamentally avoiding the limitations of multi-view fusion. Built upon this native representation, the framework naturally extends to high-precision 3D segmentation by training the same architecture to predict semantic attributes on the grid. Extensive experiments demonstrate state-of-the-art performance on both tasks, producing seamless, high-fidelity textures and accurate 3D part segmentation with precise boundaries.
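To make the "native 3D attribute grid" idea concrete, here is a minimal pure-Python sketch of such a representation: attributes (RGB color plus per-part semantic logits) are stored only at occupied voxels, so texture and part labels are read off the same volumetric structure. The class name, field layout, and method names are illustrative assumptions, not the paper's implementation.

```python
class AttributeGrid:
    """Toy stand-in for a latent 3D attribute grid: each occupied voxel
    holds one attribute vector = [R, G, B] + per-part semantic logits.
    Layout and names are hypothetical, for illustration only."""

    def __init__(self, num_parts):
        self.num_parts = num_parts
        self.cells = {}  # (x, y, z) -> attribute vector

    def set(self, coord, rgb, part_logits):
        # Store color and semantics together in one native representation.
        assert len(rgb) == 3 and len(part_logits) == self.num_parts
        self.cells[coord] = list(rgb) + list(part_logits)

    def color(self, coord):
        # Texture read-out: the first three channels.
        return self.cells[coord][:3]

    def part(self, coord):
        # Segmentation read-out: argmax over the semantic logits.
        logits = self.cells[coord][3:]
        return max(range(self.num_parts), key=lambda i: logits[i])
```

Because both read-outs share one grid, a single model predicting these attribute vectors serves texture synthesis and part segmentation at once, which is the unification the abstract describes.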
Problem

Research questions and friction points this paper is trying to address.

Multi-view fusion introduces inter-view inconsistencies and leaves complex surfaces incompletely textured
How to generate high-fidelity 3D textures directly in volumetric space, without explicit view alignment
How to extend native 3D attribute generation to precise part-level segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent 3D attribute grid for direct volumetric coloring
Diffusion Transformer with sparse attention mechanism
Unified architecture for texture synthesis and segmentation
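The sparse attention in the bullets above can be illustrated with a minimal pure-Python sketch: each occupied voxel attends only to occupied voxels within a local window, so cost scales with occupancy rather than the full dense grid. This is a simplified stand-in under stated assumptions; the paper's Diffusion Transformer uses learned Q/K/V projections, multi-head attention, and its own (unspecified here) sparsity pattern.

```python
import math

def sparse_local_attention(voxels, features, window=1):
    """Sketch of sparse attention on an occupied-voxel set.

    voxels   -- list of (x, y, z) integer coordinates (occupied cells only)
    features -- list of feature vectors, aligned with `voxels`
    window   -- Chebyshev radius of the attention neighborhood

    Each voxel's output is a softmax-weighted average of neighbor
    features; unoccupied cells are skipped entirely (the sparsity).
    """
    index = {v: i for i, v in enumerate(voxels)}
    out = []
    for (x, y, z) in voxels:
        q = features[index[(x, y, z)]]
        scores, nbr_feats = [], []
        for dx in range(-window, window + 1):
            for dy in range(-window, window + 1):
                for dz in range(-window, window + 1):
                    key = (x + dx, y + dy, z + dz)
                    if key in index:  # attend to occupied neighbors only
                        k = features[index[key]]
                        dot = sum(a * b for a, b in zip(q, k))
                        scores.append(dot / math.sqrt(len(q)))
                        nbr_feats.append(k)
        # Softmax over the sparse neighborhood (numerically stabilized).
        m = max(scores)
        ws = [math.exp(s - m) for s in scores]
        total = sum(ws)
        out.append([sum(w * f[d] for w, f in zip(ws, nbr_feats)) / total
                    for d in range(len(q))])
    return out
```

An isolated voxel attends only to itself and passes its feature through unchanged, while clustered voxels mix features with their neighbors, which is the behavior that lets attention stay tractable on high-resolution grids.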
πŸ”Ž Similar Papers
No similar papers found.