🤖 AI Summary
Existing spaceborne LiDAR observations of forest canopy height are limited by spatial sparsity and uncertainty, hindering high-resolution, continuous mapping. This study presents the first integration of canopy height models derived from airborne LiDAR data across multiple countries (~16,000 km²) with 3-meter-resolution PlanetScope satellite RGB imagery, leveraging the Depth Anything V2 monocular depth estimation framework for end-to-end training. The resulting model enables accurate and scalable canopy height retrieval without requiring stereo or LiDAR inputs. Independent validation in China (~1 km²) and the United States (~116 km²) yielded biases of 0.59 m and 0.41 m, and RMSEs of 2.54 m and 5.75 m, respectively—outperforming current global products by reducing mean absolute error by approximately 1.5 m and RMSE by about 2 m.
📝 Abstract
Large-scale, high-resolution forest canopy height mapping plays a crucial role in understanding regional and global carbon and water cycles. Spaceborne LiDAR missions, including the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) and the Global Ecosystem Dynamics Investigation (GEDI), provide global observations of forest structure but are spatially sparse and subject to inherent uncertainties. In contrast, near-surface LiDAR platforms, such as airborne and unmanned aerial vehicle (UAV) LiDAR systems, offer much finer measurements of forest canopy structure, and a growing number of countries have made these datasets openly available. In this study, a state-of-the-art monocular depth estimation model, Depth Anything V2, was trained using approximately 16,000 km² of canopy height models (CHMs) derived from publicly available airborne LiDAR point clouds and related products across multiple countries, together with 3 m resolution PlanetScope and airborne RGB imagery. The trained model, referred to as Depth2CHM, enables the estimation of spatially continuous CHMs directly from PlanetScope RGB imagery. Independent validation was conducted at sites in China (approximately 1 km²) and the United States (approximately 116 km²). The results showed that Depth2CHM could accurately estimate canopy height, with biases of 0.59 m and 0.41 m and root mean square errors (RMSEs) of 2.54 m and 5.75 m for these two sites, respectively. Compared with an existing global meter-resolution CHM product, the mean absolute error was reduced by approximately 1.5 m and the RMSE by approximately 2 m. These results demonstrated that monocular depth estimation networks trained with large-scale airborne LiDAR-derived canopy height data provide a promising and scalable pathway for high-resolution, spatially continuous forest canopy height estimation from satellite RGB imagery.
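The validation metrics quoted above (bias, mean absolute error, RMSE) are standard per-pixel comparisons between a predicted CHM and a LiDAR reference. A minimal sketch of how they might be computed is shown below; the function name and the sample height values are illustrative only, not taken from the paper:

```python
import numpy as np

def chm_errors(pred, ref):
    """Per-pixel error metrics between predicted and reference canopy heights (m).

    Returns (bias, mae, rmse):
      bias: mean of (pred - ref), positive = overestimation
      mae:  mean absolute difference
      rmse: root mean square difference
    """
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    diff = pred - ref
    bias = diff.mean()
    mae = np.abs(diff).mean()
    rmse = np.sqrt((diff ** 2).mean())
    return bias, mae, rmse

# Illustrative canopy heights in meters (not the paper's data).
pred = np.array([12.0, 18.5, 25.0, 9.5])
ref = np.array([11.0, 20.0, 24.0, 10.0])
bias, mae, rmse = chm_errors(pred, ref)
```

In practice the predicted and reference rasters would first be co-registered and masked to valid forest pixels before flattening into the arrays used here.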