🤖 AI Summary
To address the degradation in canopy cover segmentation accuracy caused by cross-modal uncertainty arising from the asynchronous acquisition of high-resolution multi-modal remote sensing imagery (optical/LiDAR/SAR), this paper proposes MURTreeFormer. The method tackles aleatoric uncertainty induced by temporal misalignment through three key innovations: (1) patch-level probabilistic latent variable modeling to explicitly characterize auxiliary-modality uncertainty; (2) a VAE-based resampling mechanism that reconstructs features for uncertain regions by sampling from the primary modality's learned distribution; and (3) a gradient magnitude attention (GMA) module to enhance tree-structure awareness, coupled with a lightweight refinement head (RH) for improved spatial detail recovery. Experiments on multi-modal datasets from Shanghai and Zurich demonstrate that MURTreeFormer significantly outperforms state-of-the-art single- and multi-modal methods, effectively mitigating the uncertainty induced by temporal inconsistency and achieving superior segmentation robustness and accuracy.
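The patch-level uncertainty modeling and VAE-based resampling described above can be pictured with the minimal PyTorch sketch below. It is an illustrative assumption, not the paper's exact formulation: the class name `PatchUncertaintyResampler`, the variance-based uncertainty score, and the threshold `tau` are all hypothetical names introduced here.

```python
import torch
import torch.nn as nn


class PatchUncertaintyResampler(nn.Module):
    """Sketch of patch-level probabilistic latent modeling with VAE-style
    resampling. Each auxiliary patch embedding is mapped to a Gaussian
    (mu, logvar); patches whose predicted variance exceeds a threshold are
    treated as uncertain and replaced by samples drawn from the primary
    modality's learned distribution. Hypothetical design, for illustration.
    """

    def __init__(self, dim: int, tau: float = 1.0):
        super().__init__()
        # Per-patch Gaussian parameters for the auxiliary modality.
        self.aux_mu = nn.Linear(dim, dim)
        self.aux_logvar = nn.Linear(dim, dim)
        # Distribution learned from the primary modality, used to
        # reconstruct patches flagged as uncertain.
        self.prim_mu = nn.Linear(dim, dim)
        self.prim_logvar = nn.Linear(dim, dim)
        self.tau = tau  # uncertainty threshold (assumed hyperparameter)

    @staticmethod
    def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
        # Standard VAE reparameterization: z = mu + sigma * eps.
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def forward(self, aux_tokens: torch.Tensor, prim_tokens: torch.Tensor) -> torch.Tensor:
        # aux_tokens, prim_tokens: (B, N, C) patch embeddings.
        mu_a, logvar_a = self.aux_mu(aux_tokens), self.aux_logvar(aux_tokens)
        mu_p, logvar_p = self.prim_mu(prim_tokens), self.prim_logvar(prim_tokens)

        # Per-patch uncertainty: mean predicted variance across channels.
        uncertain = (logvar_a.exp().mean(-1, keepdim=True) > self.tau).float()

        z_aux = self.reparameterize(mu_a, logvar_a)
        z_prim = self.reparameterize(mu_p, logvar_p)

        # Uncertain auxiliary patches are resampled from the primary
        # modality's distribution; confident ones keep their own sample.
        return uncertain * z_prim + (1.0 - uncertain) * z_aux
```

In use, something like `PatchUncertaintyResampler(dim=256)(aux_tokens, prim_tokens)` would yield the enhanced auxiliary features passed on to fusion; the hard threshold here is the simplest gating choice, and a soft (e.g., sigmoid-weighted) blend would be an equally plausible variant.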
📄 Abstract
Recent advances in semantic segmentation of multi-modal remote sensing images have significantly improved the accuracy of tree cover mapping, supporting applications in urban planning, forest monitoring, and ecological assessment. Integrating data from multiple modalities, such as optical imagery, light detection and ranging (LiDAR), and synthetic aperture radar (SAR), has shown superior performance over single-modality methods. However, these data are often acquired days or even months apart, during which various changes may occur, such as vegetation disturbances (e.g., logging and wildfires) and variations in imaging quality. Such temporal misalignments introduce cross-modal uncertainty, especially in high-resolution imagery, which can severely degrade segmentation accuracy. To address this challenge, we propose MURTreeFormer, a novel multi-modal segmentation framework that mitigates and leverages aleatoric uncertainty for robust tree cover mapping. MURTreeFormer treats one modality as primary and the others as auxiliary, explicitly modeling patch-level uncertainty in the auxiliary modalities via a probabilistic latent representation. Uncertain patches are identified and reconstructed from the primary modality's distribution through a VAE-based resampling mechanism, producing enhanced auxiliary features for fusion. In the decoder, a gradient magnitude attention (GMA) module and a lightweight refinement head (RH) are further integrated to guide attention toward tree-like structures and to preserve fine-grained spatial details. Extensive experiments on multi-modal datasets from Shanghai and Zurich demonstrate that MURTreeFormer significantly improves segmentation performance and effectively reduces the impact of temporally induced aleatoric uncertainty.
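As one way to picture the gradient magnitude attention idea, the following is a hedged PyTorch sketch: a fixed depthwise Sobel operator estimates per-channel edge magnitude, and that magnitude gates the decoder features so attention concentrates on tree-like boundaries. The class name, the 1×1 projection, and the residual gating are assumptions introduced here; the paper's actual GMA design may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientMagnitudeAttention(nn.Module):
    """Sketch of a gradient-magnitude attention gate (illustrative, not the
    paper's exact module). Fixed Sobel kernels estimate edge strength per
    channel; the magnitude is projected and used to modulate the features.
    """

    def __init__(self, channels: int):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        # Depthwise kernels so each channel is filtered independently.
        self.register_buffer("kx", sobel_x.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
        self.register_buffer("ky", sobel_y.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) decoder feature map.
        gx = F.conv2d(x, self.kx, padding=1, groups=x.shape[1])
        gy = F.conv2d(x, self.ky, padding=1, groups=x.shape[1])
        magnitude = torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)
        # Edge-aware gate steers attention toward tree-like structures.
        attn = torch.sigmoid(self.proj(magnitude))
        # Residual form so non-edge regions are not suppressed outright.
        return x * attn + x
```

The residual form (`x * attn + x`) is a common choice for such gates because it emphasizes high-gradient regions without zeroing out homogeneous canopy interiors; the refinement head (RH) for spatial detail recovery is not sketched here.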