🤖 AI Summary
To counter the performance degradation of semantic segmentation on unlabeled real-world 360° panoramic images, where annotations are scarce, this paper proposes an unsupervised multi-source-to-target domain adaptation framework that improves model generalization. Methodologically, it pioneers the integration of cross-domain adaptation into panoramic segmentation through three core components: (i) multi-source feature disentanglement via adversarial training, (ii) a panoramic spatial attention mechanism that preserves global structural coherence, and (iii) self-supervised depth-geometry consistency regularization that enforces geometric plausibility across domains. Together, these components achieve both multi-source feature alignment and panoramic structure-aware consistency. On the PanoSUN and Stanford2D3D benchmarks, the method achieves an absolute mIoU improvement of 8.2% over state-of-the-art single-source and conventional multi-source adaptation approaches, demonstrating the effectiveness of the proposed panoramic structure-aware transfer paradigm.
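To make component (i) concrete, here is a minimal NumPy sketch of the kind of objective that adversarial multi-source alignment typically optimizes: a linear domain discriminator tries to classify which source domain each encoder feature came from, and (via gradient reversal) the encoder is trained to maximize that loss so features become domain-indistinguishable. All names here (`domain_adversarial_loss`, `W`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def domain_adversarial_loss(features, domain_labels, W):
    """Cross-entropy loss of a linear domain discriminator.

    features:      (N, D) encoder features, one row per sample
    domain_labels: (N,)   integer index of the source domain of each sample
    W:             (D, K) discriminator weights for K source domains

    The discriminator minimizes this loss; with a gradient-reversal
    layer, the segmentation encoder maximizes it, pushing features
    from different source domains toward a shared distribution.
    """
    logits = features @ W                                   # (N, K)
    probs = softmax(logits, axis=-1)
    n = features.shape[0]
    return -np.log(probs[np.arange(n), domain_labels] + 1e-12).mean()

# Toy example: 4 samples, 3-dim features, 2 source domains.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 3))
labels = np.array([0, 1, 0, 1])
W = rng.standard_normal((3, 2))
loss = domain_adversarial_loss(feats, labels, W)
```

When the discriminator can no longer separate the domains, `probs` approaches uniform and the loss saturates near `log(K)`, which is the usual signal that the source features have been aligned.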