From Canopy to Ground via ForestGen3D: Learning Cross-Domain Generation of 3D Forest Structure from Aerial-to-Terrestrial LiDAR

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Airborne LiDAR (ALS) struggles to faithfully reconstruct occluded sub-canopy and understory structures, hindering large-scale ecological modeling. Method: We propose ForestGen3D—the first 3D forest generation framework based on conditional denoising diffusion probabilistic models (DDPMs)—that takes sparse ALS point clouds as input and incorporates geometric prior constraints to synthesize cross-domain 3D forest structures from canopy to ground. It provides quantifiable quality proxy metrics even without ground-truth terrestrial LiDAR (TLS) data. Results: Validated across multiple field sites, ForestGen3D accurately recovers key biophysical parameters—including tree height, diameter at breast height (DBH), and crown width—and generates point clouds highly consistent with real TLS data (42% reduction in Chamfer distance). The framework achieves high fidelity, ecological plausibility, and scalability to large regions, establishing a new paradigm for cost-effective, high-accuracy 3D forest monitoring and wildfire simulation.
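The headline result above is a 42% reduction in Chamfer distance against real TLS data. The paper's exact evaluation code is not shown here, but the symmetric Chamfer distance between two point clouds is standard and can be sketched as follows (a minimal illustration using SciPy k-d trees; the function name and mean-based aggregation are assumptions, not taken from the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3):
    mean nearest-neighbour distance from a to b, plus the reverse."""
    d_ab, _ = cKDTree(b).query(a)  # for each point in a, distance to nearest point in b
    d_ba, _ = cKDTree(a).query(b)  # and the reverse direction
    return d_ab.mean() + d_ba.mean()

# Toy check: a generated cloud that is a slightly perturbed copy of a
# reference TLS cloud should have a small but nonzero Chamfer distance.
rng = np.random.default_rng(0)
tls = rng.random((500, 3))
gen = tls + rng.normal(scale=0.01, size=tls.shape)
print(chamfer_distance(gen, tls))
```

Lower values mean the generated cloud more closely matches the TLS reference in both directions (no missing structure, no spurious structure).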

📝 Abstract
The 3D structure of living and non-living components in ecosystems plays a critical role in determining ecological processes and feedbacks from both natural and human-driven disturbances. Anticipating the effects of wildfire, drought, disease, or atmospheric deposition depends on accurate characterization of 3D vegetation structure, yet widespread measurement remains prohibitively expensive and often infeasible. We introduce ForestGen3D, a novel generative modeling framework that synthesizes high-fidelity 3D forest structure using only aerial LiDAR (ALS) inputs. ForestGen3D is based on conditional denoising diffusion probabilistic models (DDPMs) trained on co-registered ALS/TLS (terrestrial LiDAR) data. The model learns to generate TLS-like 3D point clouds conditioned on sparse ALS observations, effectively reconstructing occluded sub-canopy detail at scale. To ensure ecological plausibility, we introduce a geometric containment prior based on the convex hull of ALS observations and provide theoretical and empirical guarantees that generated structures remain spatially consistent. We evaluate ForestGen3D at tree, plot, and landscape scales using real-world data from mixed conifer ecosystems, and show that it produces high-fidelity reconstructions that closely match TLS references in terms of geometric similarity and biophysical metrics, such as tree height, DBH, crown diameter and crown volume. Additionally, we demonstrate that the containment property can serve as a practical proxy for generation quality in settings where TLS ground truth is unavailable. Our results position ForestGen3D as a scalable tool for ecological modeling, wildfire simulation, and structural fuel characterization in ALS-only environments.
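The abstract's geometric containment prior constrains generated points to lie within the convex hull of the ALS observations, and the containment property doubles as a quality proxy when no TLS ground truth exists. The paper's implementation is not reproduced here, but the core check (what fraction of generated points fall inside the ALS convex hull) can be sketched with SciPy's Delaunay triangulation; the function name and the fraction-based score are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import Delaunay

def containment_fraction(generated, als):
    """Fraction of generated points lying inside the convex hull of the
    ALS point cloud. find_simplex returns -1 for points outside the hull."""
    hull = Delaunay(als)
    return float(np.mean(hull.find_simplex(generated) >= 0))

# Toy check with the unit cube's corners as the "ALS" cloud: interior
# points are contained, far-away points are not.
als = np.array(np.meshgrid([0, 1], [0, 1], [0, 1])).reshape(3, -1).T.astype(float)
inside = np.full((5, 3), 0.5)
outside = np.full((5, 3), 2.0)
print(containment_fraction(inside, als), containment_fraction(outside, als))
```

A score near 1.0 indicates the generated structure stays spatially consistent with the ALS envelope, which is the sense in which containment serves as a TLS-free quality proxy.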
Problem

Research questions and friction points this paper is trying to address.

Generating detailed 3D forest structure from sparse aerial LiDAR data
Reconstructing occluded sub-canopy vegetation details at multiple scales
Creating ecologically plausible 3D models for wildfire and ecological simulations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative model synthesizes 3D forest from aerial LiDAR
Uses conditional diffusion models to reconstruct occluded details
Ensures plausibility with geometric containment prior for consistency
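The conditional diffusion idea in these bullets follows the standard DDPM recipe: a network trained to predict the noise in a corrupted TLS-like cloud, given the ALS condition, is run in reverse to denoise pure Gaussian noise into a dense point cloud. The paper's architecture is not described here; the sketch below shows only the generic ancestral-sampling loop with a stand-in `denoise_fn` (all names and the toy zero-noise predictor are assumptions):

```python
import numpy as np

def ddpm_reverse_sample(denoise_fn, cond, shape, betas, rng):
    """Generic conditional DDPM ancestral sampling: start from Gaussian noise
    and iteratively denoise, with denoise_fn(x, t, cond) predicting the noise."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.normal(size=shape)  # x_T ~ N(0, I)
    for t in range(len(betas) - 1, -1, -1):
        eps = denoise_fn(x, t, cond)  # predicted noise, conditioned on e.g. ALS points
        # Posterior mean: x_{t-1} = (x_t - beta_t / sqrt(1 - alpha_bar_t) * eps) / sqrt(alpha_t)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.normal(size=shape)  # add sampling noise except at t=0
    return x

# Toy run: a dummy predictor that ignores its inputs, just to exercise the loop.
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 10)
cloud = ddpm_reverse_sample(lambda x, t, c: np.zeros_like(x), cond=None,
                            shape=(100, 3), betas=betas, rng=rng)
print(cloud.shape)
```

In the paper's setting, `denoise_fn` would be the trained network and `cond` the sparse ALS point cloud, so each reverse step steers the sample toward TLS-like structure consistent with the observed canopy.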
Juan Castorena
Los Alamos National Laboratory
Machine learning, Robotics, Optimization, Signal Processing, Autonomous vehicles
E. Louise Loudermilk
Southern Research Station, Disturbance and Prescribed Fire Laboratory, 320 E Green St., Athens, GA 30606.
Scott Pokswinski
New Mexico Consortium, 4200 W Jemez Rd 200, Los Alamos, NM 87544
Rodman Linn
Los Alamos National Laboratories, Los Alamos, NM, 48124 USA