🤖 AI Summary
Accurate modeling of 3D cloud structure is critical for improving weather and climate prediction, yet existing methods are hindered by scarce labeled data and the complexity of cloud microphysical processes. To address this, we propose SatMAE, a geospatially aware masked autoencoder that integrates geolocation-based positional encoding into the MAE framework. SatMAE is trained in two stages on georeferenced geostationary satellite imagery (MSG/SEVIRI) and vertical cloud profiles from CloudSat/CPR radar: self-supervised pretraining on unlabeled imagery, followed by supervised fine-tuning on paired image-profile data. By reducing reliance on dense manual annotations, SatMAE significantly outperforms conventional supervised baselines such as U-Net in 3D cloud reconstruction, cutting prediction error by 12.7%. The approach enables near-real-time generation of 3D cloud fields at high spatiotemporal resolution, directly supporting next-generation climate modeling systems.
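The geolocation awareness described above can be illustrated with a minimal sketch: encoding a pixel's latitude/longitude as a sinusoidal vector that could be added to a ViT/MAE token embedding. The function name, frequency schedule, and dimensions here are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def geo_positional_encoding(lat, lon, dim):
    """Sinusoidal encoding of latitude/longitude (degrees) into a `dim`-vector.

    Hypothetical sketch of injecting geolocation into a token stream;
    the actual SatMAE encoding may differ.
    """
    coords = np.array([np.radians(lat), np.radians(lon)])   # (2,)
    freqs = 2.0 ** np.arange(dim // 4)                      # geometric frequency ladder
    angles = coords[:, None] * freqs[None, :]               # (2, dim/4)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=1)  # (2, dim/2)
    return enc.reshape(-1)                                  # (dim,)

# Example: encode a point over central Europe into a 16-dim vector.
vec = geo_positional_encoding(48.2, 16.4, dim=16)
print(vec.shape)  # (16,)
```

Because sin/cos are bounded, the encoding stays in [-1, 1] and can be summed with patch embeddings without rescaling, which is the usual design choice for transformer positional encodings.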
📝 Abstract
Clouds play a key role in Earth's radiation balance, and their complex effects introduce large uncertainties into climate models. Real-time 3D cloud data is essential for improving climate predictions. This study leverages geostationary imagery from MSG/SEVIRI and radar reflectivity profiles from CloudSat/CPR to reconstruct 3D cloud structures. We first apply self-supervised learning (SSL) methods, Masked Autoencoders (MAE) and the geospatially aware SatMAE, to unlabeled MSG images, and then fine-tune our models on matched image-profile pairs. Our approach outperforms state-of-the-art methods such as U-Nets, and our geospatial encoding further improves prediction results, demonstrating the potential of SSL for 3D cloud reconstruction.
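The MAE pretraining stage described above rests on two operations: splitting an image into patches and randomly hiding most of them, so the decoder can be trained to reconstruct the hidden patches. The following is a minimal NumPy sketch of that masking step only; the patch size, 75% mask ratio, and function names follow the standard MAE recipe and are assumptions, not the paper's code.

```python
import numpy as np

def patchify(img, p):
    """Split an HxW image into non-overlapping p x p patches, each flattened."""
    H, W = img.shape
    patches = img.reshape(H // p, p, W // p, p).transpose(0, 2, 1, 3)
    return patches.reshape(-1, p * p)

def random_mask(n_patches, mask_ratio, rng):
    """Return (visible, masked) patch indices via MAE-style random masking."""
    n_mask = int(n_patches * mask_ratio)
    perm = rng.permutation(n_patches)
    return perm[n_mask:], perm[:n_mask]

# Hypothetical example: a 64x64 single-channel SEVIRI-like image,
# 8x8 patches, 75% masking as in the original MAE recipe.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
patches = patchify(img, 8)                               # 64 patches of 64 pixels
visible, masked = random_mask(len(patches), 0.75, rng)   # 16 visible, 48 masked

# The encoder would see only patches[visible]; the decoder is trained to
# reconstruct patches[masked], e.g. under a mean-squared-error loss.
print(patches.shape, len(visible), len(masked))  # (64, 64) 16 48
```

Fine-tuning then replaces the reconstruction head with a regression head that maps the encoded image tokens to a CloudSat/CPR-style vertical reflectivity profile.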