🤖 AI Summary
This work tackles semi-supervised medical image segmentation under scarce annotations, high inter-annotator variability, and insufficient multi-scale feature fusion, conditions under which existing methods degrade markedly on small structures and boundary regions. The authors propose SASNet, a dual-branch architecture that integrates low-level and high-level features through three key components: a scale-adaptive reweighting strategy, a 3D Fourier-domain view-variation augmentation mechanism, and a signed-distance-map-based consistency learning framework that jointly models spatial, temporal, and geometric consistency between the segmentation and regression tasks. Extensive experiments on the LA, Pancreas-CT, and BraTS datasets show that the method substantially outperforms current semi-supervised approaches and comes close to fully supervised baselines, with notable gains in small-lesion and boundary-delineation accuracy.
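The signed distance map used as the regression target in such consistency frameworks is typically derived from the binary segmentation mask: negative inside the object, positive outside, zero on the boundary. The summary does not give SASNet's exact formulation, but a minimal sketch of one common convention (using SciPy's Euclidean distance transform) looks like this:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Signed distance map from a binary mask (one common convention):
    negative inside the foreground, positive outside. NOTE: an illustrative
    sketch, not necessarily SASNet's exact formulation."""
    mask = mask.astype(bool)
    # Degenerate masks (all background or all foreground) have no boundary.
    if not mask.any() or mask.all():
        return np.zeros(mask.shape, dtype=np.float32)
    inside = distance_transform_edt(mask)     # distance to background
    outside = distance_transform_edt(~mask)   # distance to foreground
    return (outside - inside).astype(np.float32)

# Example: a single foreground voxel in a 5x5x5 volume.
vol = np.zeros((5, 5, 5), dtype=np.uint8)
vol[2, 2, 2] = 1
sdm = signed_distance_map(vol)
```

Regressing this map alongside the segmentation logits gives the network an explicit geometric signal near boundaries, which is where the summary reports the largest gains.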
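The summary does not spell out the 3D Fourier-domain view-variation mechanism. As a hedged, purely illustrative sketch (the function name, `beta`, and `alpha` are all hypothetical), augmentations in this family often mix the low-frequency amplitude spectrum of one volume with another's while preserving phase, in the spirit of Fourier domain adaptation:

```python
import numpy as np

def fourier_view_variation(vol_a, vol_b, beta=0.1, alpha=0.5):
    """Illustrative 3D Fourier-domain augmentation (hypothetical sketch,
    not SASNet's published method): blend the central low-frequency
    amplitudes of vol_a with vol_b's, keeping vol_a's phase intact."""
    fa, fb = np.fft.fftn(vol_a), np.fft.fftn(vol_b)
    amp_a, pha_a = np.abs(fa), np.angle(fa)
    # Shift so low frequencies sit at the centre, then mix a small cube.
    amp_a_s = np.fft.fftshift(amp_a)
    amp_b_s = np.fft.fftshift(np.abs(fb))
    d, h, w = vol_a.shape
    bd, bh, bw = (max(1, int(beta * s)) for s in (d, h, w))
    cd, ch, cw = d // 2, h // 2, w // 2
    sl = (slice(cd - bd, cd + bd), slice(ch - bh, ch + bh),
          slice(cw - bw, cw + bw))
    amp_a_s[sl] = (1 - alpha) * amp_a_s[sl] + alpha * amp_b_s[sl]
    amp_mixed = np.fft.ifftshift(amp_a_s)
    # Recombine mixed amplitude with the original phase.
    return np.real(np.fft.ifftn(amp_mixed * np.exp(1j * pha_a)))

rng = np.random.default_rng(0)
v = rng.standard_normal((8, 8, 8))
out = fourier_view_variation(v, v, beta=0.2)  # mixing with itself is identity
```

Because only low-frequency amplitudes change, the augmented volume keeps the anatomy's structure (carried by phase) while varying global appearance, which is a plausible way to generate the "view variations" the summary refers to.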