Textured Geometry Evaluation: Perceptual 3D Textured Shape Metric via 3D Latent-Geometry Network

📅 2025-12-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing 3D quality assessment metrics (e.g., Chamfer Distance) exhibit poor correlation with human perception, while rendering-based learning methods suffer from view bias, incomplete structural coverage, and inadequate modeling of authentic distortions. To address these limitations, this work proposes the first end-to-end fidelity assessment method specifically designed for textured 3D meshes. We introduce a 3D latent geometric network that jointly encodes geometric structure and surface color features—bypassing rendering-induced artifacts. Furthermore, we construct the first high-quality, human-annotated dataset of textured 3D models exhibiting realistic distortions. Extensive experiments demonstrate that our method significantly outperforms state-of-the-art geometry- and rendering-based baselines in perceptual consistency, robustness, and generalization across diverse distortion types. On multiple benchmarks, it achieves up to a 12.6% improvement in Spearman rank-order correlation coefficient (SROCC).

📝 Abstract
Textured high-fidelity 3D models are crucial for games, AR/VR, and film, yet human-aligned evaluation methods still lag behind recent advances in 3D reconstruction and generation. Existing metrics, such as Chamfer Distance, often fail to align with how humans judge the fidelity of 3D shapes. Recent learning-based metrics attempt to improve on this by relying on rendered images and 2D image quality metrics; however, these approaches suffer from incomplete structural coverage and sensitivity to viewpoint choices. Moreover, most methods are trained on synthetic distortions, which differ significantly from real-world distortions, resulting in a domain gap. To address these challenges, we propose a new fidelity evaluation method that operates directly on textured 3D meshes, without relying on rendering. Our method, named Textured Geometry Evaluation (TGE), jointly uses geometry and color information to compute the fidelity of an input textured mesh against a reference colored shape. To train and evaluate our metric, we build a human-annotated dataset with real-world distortions. Experiments show that TGE outperforms rendering-based and geometry-only methods on this real-world distortion dataset.
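For context on the baseline the abstract criticizes: Chamfer Distance compares two shapes purely by nearest-neighbor geometry, ignoring texture and perceptual structure. A minimal sketch in NumPy (a brute-force implementation for illustration, not the paper's code):

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point clouds p (N, 3) and q (M, 3):
    mean nearest-neighbor squared distance from p to q, plus from q to p."""
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Closest match in each direction, averaged, then summed.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Identical clouds give zero; any geometric deviation increases the score,
# but color and perceptually salient structure are invisible to the metric.
pts = np.random.rand(100, 3)
print(chamfer_distance(pts, pts))  # → 0.0
```

Because the score depends only on point positions, two meshes with identical geometry but badly corrupted textures are indistinguishable under this metric, which is exactly the gap TGE targets.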
Problem

Research questions and friction points this paper is trying to address.

Evaluating textured 3D shape fidelity in a way that aligns with human perception
Overcoming the limitations of rendering-based and geometry-only evaluation metrics
Closing the domain gap caused by training on synthetic rather than real-world distortions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct evaluation on 3D meshes without rendering
Joint use of geometry and surface color information
Training on a human-annotated dataset of real-world distortions