🤖 AI Summary
This work addresses the lack of reproducible, quantifiable benchmarks for evaluating digital twin generation methods, which are typically compared through subjective qualitative judgments. The paper proposes a pipeline that renders synthetic images from high-fidelity 3D models using programmatically generated camera poses, so that reconstruction results can be assessed quantitatively against known ground-truth parameters. The approach couples a programmable virtual environment with a ground-truth parameter reference, integrating procedural trajectory generation, photorealistic rendering, and feature-based triangulation for reconstruction. The resulting framework supports repeatable, objective comparisons across different generation strategies, improving the consistency and rigor of digital twin evaluation.
📝 Abstract
The generation of 3D models from real-world objects has often been accomplished through photogrammetry, i.e., by taking 2D photos from a variety of perspectives and then triangulating matched point-based features to create a textured mesh. Many design choices exist within this framework for the generation of digital twins, and differences between such approaches are largely judged qualitatively. Here, we present and test a novel pipeline for generating synthetic images from high-quality 3D models and programmatically generated camera poses. This enables a wide variety of repeatable, quantifiable experiments which can compare ground-truth knowledge of virtual camera parameters and of virtual objects against the reconstructed estimations of those perspectives and subjects.
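The two operations the abstract leans on — programmatically placing virtual cameras around an object and triangulating matched point features back into 3D — can be sketched as below. This is a minimal illustration under standard pinhole-camera assumptions, not the paper's implementation; all function names are hypothetical.

```python
import numpy as np

def look_at_pose(cam_pos, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """World-to-camera rotation R and translation t for a camera at
    cam_pos looking at target (camera z-axis points forward)."""
    forward = target - cam_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    R = np.stack([right, true_up, forward])  # rows: camera axes in world frame
    t = -R @ cam_pos                         # X_cam = R @ X_world + t
    return R, t

def ring_poses(n, radius=2.0, height=0.25):
    """Procedurally place n cameras on a ring around the origin --
    the kind of repeatable, known-ground-truth trajectory the paper uses
    synthetic rendering to exploit."""
    poses = []
    for k in range(n):
        theta = 2.0 * np.pi * k / n
        cam_pos = np.array([radius * np.cos(theta),
                            radius * np.sin(theta), height])
        poses.append(look_at_pose(cam_pos))
    return poses

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its pixel
    coordinates x1, x2 in two views with projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A (homogeneous point)
    return X[:3] / X[3]
```

Because the camera poses are generated rather than estimated, a reconstructed point can be compared directly against the ground-truth 3D coordinate, which is the quantitative comparison the pipeline enables.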