🤖 AI Summary
In current aerial and UAV photogrammetry, multi-view stereo (MVS)–generated point clouds lack standardized, certifiable per-point uncertainty quantification, primarily due to the non-differentiable, multimodal nature of MVS and ill-defined error propagation paths.
Method: This paper proposes an error-propagation–based uncertainty quantification framework: first, a disparity uncertainty regression model is constructed by statistically fusing reprojection errors from structure-from-motion (SfM) and bundle adjustment with MVS-specific features (e.g., matching cost); second, a self-calibration mechanism is designed that leverages high-confidence 3D points as reference observations to calibrate the disparity uncertainty model—enabling end-to-end, physically interpretable accuracy assessment.
Results: Evaluated on multiple public aerial and UAV datasets, the method achieves high point cloud coverage while significantly mitigating uncertainty overestimation, thereby improving the reliability and credibility of the reported point cloud accuracy.
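To make the error-propagation idea concrete, a minimal sketch (not from the paper) of the standard first-order propagation of disparity uncertainty into depth uncertainty for a rectified stereo pair, where Z = B·f/d implies σ_Z = (Z²/(B·f))·σ_d; the numeric values are illustrative assumptions, not results from the paper:

```python
def depth_sigma_from_disparity(z, baseline, focal_px, sigma_d):
    """First-order propagation of disparity uncertainty to depth.

    For a rectified stereo pair, Z = B * f / d, so
    |dZ/dd| = Z^2 / (B * f)  and  sigma_Z = (Z^2 / (B * f)) * sigma_d.
    """
    return (z ** 2) / (baseline * focal_px) * sigma_d

# Illustrative (assumed) numbers: 50 m depth, 0.3 m baseline,
# 4000 px focal length, 0.5 px disparity uncertainty.
sigma_z = depth_sigma_from_disparity(50.0, 0.3, 4000.0, 0.5)
```

The quadratic growth of σ_Z with depth is why a single global accuracy figure is misleading for photogrammetric point clouds, motivating the per-point treatment described above.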
📝 Abstract
Uncertainty quantification of the photogrammetry process is essential for providing per-point accuracy credentials for point clouds. Unlike airborne LiDAR, which typically delivers consistent accuracy across various scenes, the accuracy of photogrammetric point clouds is highly scene-dependent, since it relies on algorithm-generated measurements (i.e., stereo or multi-view stereo). Generally, errors of photogrammetric point clouds propagate through a two-step process: Structure-from-Motion (SfM) with Bundle Adjustment (BA), followed by Multi-view Stereo (MVS). While uncertainty estimation in the SfM stage has been well studied using the first-order statistics of the reprojection error function, that in the MVS stage remains largely unsolved and non-standardized, primarily due to its non-differentiable and multi-modal nature (i.e., mapping from pixel values to geometry). In this paper, we present an uncertainty quantification framework that closes this gap by associating an error covariance matrix with each point, accounting for this two-step photogrammetry process. Specifically, to estimate the uncertainty in the MVS stage, we propose a novel, self-calibrating method that takes reliable n-view points (n ≥ 6) per view to regress the disparity uncertainty from highly relevant cues (such as matching cost values) in the MVS stage. Compared to existing approaches, our method uses self-contained, reliable 3D points extracted directly from the MVS process, with the benefit of being self-supervised and naturally adhering to the error propagation path of the photogrammetry process, thereby providing robust and certifiable uncertainty quantification across diverse scenes. We evaluate the framework on a variety of publicly available airborne and UAV imagery datasets. Results demonstrate that our method outperforms existing approaches by achieving high bounding rates without overestimating uncertainty.