π€ AI Summary
Current 3D texture generation heavily relies on manual UV mappingβtime-consuming and lacking semantic awareness and visibility considerations. To address this, we propose the first unsupervised, differentiable UV parameterization framework that jointly incorporates semantic consistency and visibility awareness. Our method (1) achieves semantically coherent UV chart decomposition via mesh semantic segmentation and cross-shape semantic alignment; (2) introduces ambient occlusion (AO)-weighted soft seam optimization to implicitly guide cuts toward low-visibility regions; and (3) designs an end-to-end trainable backbone that jointly optimizes UV parameterization and seam distribution. Quantitative and qualitative evaluations across multiple benchmarks demonstrate that our approach significantly reduces visible seam artifacts and substantially improves downstream texture generation quality and visual naturalness. This work establishes a new paradigm for automated, high-fidelity 3D content generation.
π Abstract
Recent 3D generative models produce high-quality textures for 3D mesh objects. However, they commonly rely on the heavy assumption that input 3D meshes are accompanied by manual mesh parameterization (UV mapping), a manual task that requires both technical precision and artistic judgment. Industry surveys show that this process often accounts for a significant share of asset creation, creating a major bottleneck for 3D content creators. Moreover, existing automatic methods often ignore two perceptually important criteria: (1) semantic awareness (UV charts should align semantically similar 3D parts across shapes) and (2) visibility awareness (cutting seams should lie in regions unlikely to be seen). To overcome these shortcomings and to automate the mesh parameterization process, we present an unsupervised differentiable framework that augments standard geometry-preserving UV learning with semantic- and visibility-aware objectives. For semantic-awareness, our pipeline (i) segments the mesh into semantic 3D parts, (ii) applies an unsupervised learned per-part UV-parameterization backbone, and (iii) aggregates per-part charts into a unified UV atlas. For visibility-awareness, we use ambient occlusion (AO) as an exposure proxy and back-propagate a soft differentiable AO-weighted seam objective to steer cutting seams toward occluded regions. By conducting qualitative and quantitative evaluations against state-of-the-art methods, we show that the proposed method produces UV atlases that better support texture generation and reduce perceptible seam artifacts compared to recent baselines. Our implementation code is publicly available at: https://github.com/AHHHZ975/Semantic-Visibility-UV-Param.