Multi-modal Uncertainty Robust Tree Cover Segmentation For High-Resolution Remote Sensing Images

πŸ“… 2025-09-05
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the degradation in tree cover segmentation accuracy caused by cross-modal uncertainty arising from the asynchronous acquisition of high-resolution multi-modal remote sensing imagery (optical/LiDAR/SAR), this paper proposes MURTreeFormer. The method tackles temporal misalignment-induced aleatoric uncertainty through three key innovations: (1) patch-level probabilistic latent variable modeling to explicitly characterize auxiliary-modality uncertainty; (2) a VAE-based resampling mechanism that reconstructs features for uncertain patches by sampling from the primary modality's learned distribution; and (3) a gradient magnitude attention (GMA) module to enhance awareness of tree-like structures, coupled with a lightweight refinement head (RH) for improved recovery of spatial detail. Experiments on multi-modal datasets from Shanghai and Zurich demonstrate that MURTreeFormer significantly outperforms state-of-the-art single- and multi-modal methods, effectively mitigating the uncertainty induced by temporal inconsistency and achieving superior segmentation robustness and accuracy.
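The paper's implementation is not reproduced here, but the patch-level uncertainty modeling and VAE-style resampling described above can be sketched as follows. All names, shapes, and the threshold `tau` are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (the standard VAE reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def resample_uncertain_patches(aux_feat, aux_logvar, prim_mu, prim_logvar, rng, tau=0.5):
    """Replace auxiliary-modality patches whose predicted variance exceeds a
    threshold with samples drawn from the primary modality's latent
    distribution, mirroring the paper's high-level description."""
    # Per-patch uncertainty score: mean predicted variance over channels.
    uncertainty = np.exp(aux_logvar).mean(axis=-1)         # (num_patches,)
    uncertain = uncertainty > tau                          # boolean mask
    resampled = reparameterize(prim_mu, prim_logvar, rng)  # (num_patches, dim)
    # Keep reliable auxiliary patches; substitute resampled features elsewhere.
    out = np.where(uncertain[:, None], resampled, aux_feat)
    return out, uncertain

# Toy example: 4 patches with 8-dim features; patches 1 and 3 have high variance.
aux_feat = rng.standard_normal((4, 8))
aux_logvar = np.array([[-2.0] * 8, [1.0] * 8, [-3.0] * 8, [2.0] * 8])
prim_mu = np.zeros((4, 8))
prim_logvar = np.full((4, 8), -1.0)

fused, mask = resample_uncertain_patches(aux_feat, aux_logvar, prim_mu, prim_logvar, rng)
```

In this sketch, certain patches pass through unchanged while uncertain ones are re-drawn from the primary modality's distribution before fusion.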

πŸ“ Abstract
Recent advances in semantic segmentation of multi-modal remote sensing images have significantly improved the accuracy of tree cover mapping, supporting applications in urban planning, forest monitoring, and ecological assessment. Integrating data from multiple modalities, such as optical imagery, light detection and ranging (LiDAR), and synthetic aperture radar (SAR), has shown superior performance over single-modality methods. However, these data are often acquired days or even months apart, during which various changes may occur, such as vegetation disturbances (e.g., logging and wildfires) and variations in imaging quality. Such temporal misalignments introduce cross-modal uncertainty, especially in high-resolution imagery, which can severely degrade segmentation accuracy. To address this challenge, we propose MURTreeFormer, a novel multi-modal segmentation framework that mitigates and leverages aleatoric uncertainty for robust tree cover mapping. MURTreeFormer treats one modality as primary and others as auxiliary, explicitly modeling patch-level uncertainty in the auxiliary modalities via a probabilistic latent representation. Uncertain patches are identified and reconstructed from the primary modality's distribution through a VAE-based resampling mechanism, producing enhanced auxiliary features for fusion. In the decoder, a gradient magnitude attention (GMA) module and a lightweight refinement head (RH) are further integrated to guide attention toward tree-like structures and to preserve fine-grained spatial details. Extensive experiments on multi-modal datasets from Shanghai and Zurich demonstrate that MURTreeFormer significantly improves segmentation performance and effectively reduces the impact of temporally induced aleatoric uncertainty.
Problem

Research questions and friction points this paper is trying to address.

Addresses cross-modal uncertainty in multi-modal remote sensing images
Mitigates temporal misalignment effects on tree cover segmentation
Improves robustness against aleatoric uncertainty in high-resolution imagery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probabilistic latent modeling for cross-modal uncertainty
VAE-based resampling for auxiliary feature enhancement
Gradient attention and refinement for spatial details
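The gradient-attention idea above can be illustrated with a minimal sketch: weight a feature map by its normalized gradient magnitude so that edge-rich (tree-like) regions receive more attention. The function name, the residual-style reweighting, and the use of `np.gradient` are assumptions for illustration, not the paper's GMA module:

```python
import numpy as np

def gradient_magnitude_attention(feat):
    """Reweight a 2-D feature map by its normalized gradient magnitude:
    regions with strong local structure get amplified responses."""
    gy, gx = np.gradient(feat)              # gradients along rows and columns
    mag = np.sqrt(gx**2 + gy**2)            # per-pixel gradient magnitude
    attn = mag / (mag.max() + 1e-8)         # normalize attention to [0, 1]
    return feat * (1.0 + attn)              # residual-style reweighting

# Toy example: a 5x5 map with a single strong response at the center.
feat = np.zeros((5, 5))
feat[2, 2] = 1.0
out = gradient_magnitude_attention(feat)
```

The residual form `feat * (1 + attn)` is a common design choice: it boosts structured regions without ever suppressing the original features.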
πŸ”Ž Similar Papers
No similar papers found.
Yuanyuan Gui
School of Information and Electronics, Beijing Institute of Technology, and the National Key Laboratory of Science and Technology on Space-Born Intelligent Information Processing, Beijing 100081, China, and also with Beijing Institute of Technology, Zhuhai, Guangdong 519088, China
Wei Li
School of Information and Electronics, Beijing Institute of Technology, and the National Key Laboratory of Science and Technology on Space-Born Intelligent Information Processing, Beijing 100081, China, and also with Beijing Institute of Technology, Zhuhai, Guangdong 519088, China
Yinjian Wang
School of Information and Electronics, Beijing Institute of Technology, and the National Key Laboratory of Science and Technology on Space-Born Intelligent Information Processing, Beijing 100081, China, and also with Beijing Institute of Technology, Zhuhai, Guangdong 519088, China
Xiang-Gen Xia
Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA
signal processing, digital communications, radar signal processing
Mauro Marty
Swiss Federal Institute for Forest, Snow, and Landscape Research WSL, CH-8903, Birmensdorf, Switzerland
Christian Ginzler
Swiss Federal Institute for Forest, Snow and Landscape Research WSL
remote sensing, landscape change science, 3D mapping
Zuyuan Wang
Swiss Federal Institute for Forest, Snow, and Landscape Research WSL, CH-8903, Birmensdorf, Switzerland