🤖 AI Summary
This study addresses the challenging problem of reconstructing high-fidelity 3D tree point clouds from a single orthophoto and a digital surface model (DSM), without access to real 3D annotations, species labels, or ground-based laser scans. The authors propose a neural reconstruction framework trained exclusively on procedurally generated synthetic tree data, combining geometric supervision with differentiable shadow and silhouette rendering losses to recover fine structural detail of trees in real-world scenes. As the first single-view tree reconstruction method to integrate differentiable rendering with synthetic training data in a setting that requires no real-world supervision, it outperforms existing approaches in reconstruction quality, structural plausibility, and generalization, making it well suited to applications such as interactive 3D digital mapping.
📝 Abstract
We present TreeON, a novel neural framework for reconstructing detailed 3D tree point clouds from sparse top-down geodata, using only a single orthophoto and its corresponding Digital Surface Model (DSM). Our method introduces a new training supervision strategy that combines geometric supervision with differentiable shadow and silhouette losses to learn point cloud representations of trees without requiring species labels, procedural rules, terrestrial reconstruction data, or ground laser scans. To address the lack of ground truth data, we generate a synthetic dataset of point clouds from procedurally modeled trees and train our network on it. Quantitative and qualitative experiments demonstrate improved reconstruction quality and coverage over existing methods, as well as strong generalization to real-world data, producing visually appealing and structurally plausible tree point cloud representations suitable for integration into interactive digital 3D maps. The codebase, synthetic dataset, and pretrained model are publicly available at https://angelikigram.github.io/treeON/.