Uncertainty Quantification Framework for Aerial and UAV Photogrammetry through Error Propagation

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
In current aerial and UAV photogrammetry, point clouds generated by multi-view stereo (MVS) lack standardized, certifiable per-point uncertainty quantification, primarily because the MVS stage is non-differentiable and multi-modal and its error propagation path is ill defined. Method: the paper proposes an error-propagation-based uncertainty quantification framework. Uncertainty from the structure-from-motion (SfM) and bundle adjustment (BA) stage is derived from first-order statistics of the reprojection error, while disparity uncertainty in the MVS stage is regressed from MVS-specific cues (e.g., matching cost) through a self-calibrating mechanism that exploits reliable, high-confidence multi-view 3D points, yielding an end-to-end, physically interpretable per-point accuracy assessment. Results: evaluated on multiple public airborne and UAV datasets, the method achieves high bounding rates while avoiding the uncertainty overestimation of existing approaches, improving the reliability and credibility of reported point cloud accuracy.
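The summary describes propagating disparity uncertainty into per-point 3D uncertainty. As a minimal sketch of what such first-order propagation looks like for a rectified stereo pair (an illustrative simplification, not the paper's formulation; the focal length, baseline, and disparity values below are assumed for the example), depth Z = f*B/d implies sigma_Z ≈ (f*B/d^2) * sigma_d:

```python
import numpy as np

def depth_uncertainty(f_px, baseline_m, disparity_px, sigma_d_px):
    """First-order propagation of disparity uncertainty to depth uncertainty
    for a rectified stereo pair: Z = f*B/d, so dZ/dd = -f*B/d^2 and
    sigma_Z ~= (f*B/d^2) * sigma_d = (Z^2 / (f*B)) * sigma_d."""
    Z = f_px * baseline_m / disparity_px
    sigma_Z = (f_px * baseline_m / disparity_px**2) * sigma_d_px
    return Z, sigma_Z

# Example: 4000 px focal length, 0.5 m baseline, 20 px disparity,
# 0.3 px disparity standard deviation (all values illustrative).
Z, sigma_Z = depth_uncertainty(4000.0, 0.5, 20.0, 0.3)
print(f"depth = {Z:.1f} m, 1-sigma depth uncertainty = {sigma_Z:.2f} m")
```

The quadratic growth of sigma_Z with depth is why per-point, scene-dependent uncertainty matters more for photogrammetric point clouds than for airborne LiDAR.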

📝 Abstract
Uncertainty quantification of the photogrammetry process is essential for providing per-point accuracy credentials for the resulting point clouds. Unlike airborne LiDAR, which typically delivers consistent accuracy across various scenes, the accuracy of photogrammetric point clouds is highly scene-dependent, since it relies on algorithm-generated measurements (i.e., stereo or multi-view stereo). Generally, errors of photogrammetric point clouds propagate through a two-step process: Structure-from-Motion (SfM) with Bundle Adjustment (BA), followed by Multi-view Stereo (MVS). While uncertainty estimation in the SfM stage has been well studied using the first-order statistics of the reprojection error function, uncertainty estimation in the MVS stage remains largely unsolved and non-standardized, primarily due to its non-differentiable and multi-modal nature (i.e., mapping from pixel values to geometry). In this paper, we present an uncertainty quantification framework that closes this gap by associating with each point an error covariance matrix that accounts for this two-step photogrammetry process. Specifically, to estimate the uncertainty in the MVS stage, we propose a novel, self-calibrating method that takes reliable n-view points (n >= 6) per view and regresses the disparity uncertainty from highly relevant cues (such as matching cost values) of the MVS stage. Compared to existing approaches, our method uses self-contained, reliable 3D points extracted directly from the MVS process, with the benefit of being self-supervised and naturally adhering to the error propagation path of the photogrammetry process, thereby providing a robust and certifiable uncertainty quantification across diverse scenes. We evaluate the framework on a variety of publicly available airborne and UAV imagery datasets. Results demonstrate that our method outperforms existing approaches by achieving high bounding rates without overestimating uncertainty.
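For context on the SfM/BA stage that the abstract calls well studied, the usual first-order estimate takes the parameter covariance from the Gauss-Newton approximation of the reprojection error. The following is a minimal sketch of that textbook computation, not code from the paper; the Jacobian and residual vector are assumed to come from a converged bundle adjustment:

```python
import numpy as np

def ba_parameter_covariance(J, r):
    """Gauss-Newton covariance of bundle-adjustment parameters from
    first-order statistics of the reprojection error:
      sigma0^2 = r^T r / (m - n)      (a posteriori variance factor)
      Cov(x)  ~= sigma0^2 * (J^T J)^{-1}
    J: (m, n) Jacobian of reprojection residuals w.r.t. parameters,
    r: (m,) residual vector, both evaluated at the BA solution."""
    m, n = J.shape
    sigma0_sq = float(r @ r) / (m - n)
    cov = sigma0_sq * np.linalg.inv(J.T @ J)
    return cov

# Toy example with a random, well-conditioned Jacobian (illustrative only).
rng = np.random.default_rng(0)
J = rng.normal(size=(200, 6))
r = 0.5 * rng.normal(size=200)        # ~0.5 px reprojection noise
cov = ba_parameter_covariance(J, r)
print("per-parameter 1-sigma:", np.sqrt(np.diag(cov)))
```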
Problem

Research questions and friction points this paper is trying to address.

Quantify per-point uncertainty in photogrammetric point clouds (a bounding-rate check for such estimates is sketched after this list)
Address the non-standardized uncertainty estimation in the Multi-view Stereo (MVS) stage
Propose a self-calibrating method for disparity uncertainty estimation
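The abstract reports that the method "achieves high bounding rates without overestimating uncertainty". Below is a small sketch of one way such a bounding-rate check can be computed (my reading of the metric, not the paper's exact definition; the reference errors and predicted sigmas are simulated):

```python
import numpy as np

def bounding_rate(errors, sigmas, k=1.96):
    """Fraction of points whose absolute error is bounded by k * predicted sigma.
    A well-calibrated model should reach roughly the nominal confidence level
    (~95% for k = 1.96 under a Gaussian assumption) without inflating sigma."""
    errors = np.abs(np.asarray(errors))
    sigmas = np.asarray(sigmas)
    return float(np.mean(errors <= k * sigmas))

# Illustrative check: simulated errors drawn from the predicted sigmas.
rng = np.random.default_rng(1)
sigmas = rng.uniform(0.02, 0.10, size=10_000)   # predicted per-point sigma (m)
errors = rng.normal(0.0, sigmas)                # "true" errors vs. a reference
print(f"bounding rate at 1.96 sigma: {bounding_rate(errors, sigmas):.3f}")
# Mean predicted sigma indicates how tight (non-overestimated) the bounds are.
print(f"mean predicted sigma: {sigmas.mean():.3f} m")
```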
Innovation

Methods, ideas, or system contributions that make the work stand out.

Per-point error covariance matrix covering the full SfM-BA-MVS process
Self-calibrating disparity uncertainty regression (see the sketch after this list)
Self-supervised extraction of reliable 3D points directly from the MVS process
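As a rough illustration of the self-calibration idea listed above, and only a sketch under assumptions (the cue set, the reliable-point selection with n >= 6 views, and the residual labels are placeholders, not the paper's implementation), one can regress the magnitude of disparity residuals at reliable multi-view points against MVS cues such as matching cost:

```python
import numpy as np

def fit_disparity_uncertainty(cues, disparity_residuals):
    """Least-squares fit of |disparity residual| against MVS cues, using
    reliable multi-view points as self-supervision. |e| is a simple proxy
    for sigma (up to a constant factor under Gaussian noise).
    cues: (N, k) cue matrix; disparity_residuals: (N,) residuals."""
    X = np.column_stack([cues, np.ones(len(cues))])   # add bias column
    y = np.abs(disparity_residuals)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_sigma_d(cues, w):
    """Predict per-pixel disparity sigma from the fitted cue weights."""
    X = np.column_stack([cues, np.ones(len(cues))])
    return np.maximum(X @ w, 1e-6)                    # keep sigma positive

# Illustrative use: matching cost as the single cue, synthetic residuals.
rng = np.random.default_rng(2)
cost = rng.uniform(0.0, 1.0, size=(5000, 1))          # placeholder matching cost
residuals = rng.normal(0.0, 0.1 + 0.4 * cost[:, 0])   # noisier where cost is high
w = fit_disparity_uncertainty(cost, residuals)
sigma_d = predict_sigma_d(cost, w)
print("fitted weights:", w, " mean predicted sigma_d:", sigma_d.mean())
```

The predicted per-pixel sigma_d would then feed the first-order propagation to 3D point covariance sketched earlier.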