🤖 AI Summary
In medical image segmentation, ambiguous annotations and pervasive noisy labels arise from ill-defined lesion boundaries, irregular morphologies, and subtle tissue density variations; existing models suffer from unstable feature learning because they neglect quality differences between labels. To address this, we propose Data-Driven Alternating Learning (DALE): a novel paradigm that models label quality as learnable confidence parameters, jointly enforcing loss consistency and dynamic confidence-weighted optimization. DALE further incorporates feature-level distribution alignment, via MMD or adversarial learning, together with multi-scale stability regularization, co-adapting representation distributions within an alternating optimization framework. We provide theoretical guarantees on its robustness to label noise. DALE is architecture-agnostic and plug-and-play. Evaluated on multiple lesion segmentation benchmarks, it achieves an average Dice improvement of 7.16%, significantly enhancing model robustness and cross-domain generalization.
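The MMD-based feature alignment mentioned above can be sketched as a squared maximum mean discrepancy between two batches of features under an RBF kernel. This is a generic illustration, not the paper's implementation; the function name and bandwidth choice are assumptions.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between feature batches X and Y.

    X, Y: (n, d) arrays of features; sigma: RBF kernel bandwidth
    (illustrative default, not taken from the paper).
    """
    def kernel(A, B):
        # Pairwise squared Euclidean distances, then RBF kernel values.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()
```

Minimizing such a term over, say, features of confident versus ambiguous regions would pull the two representation distributions together, which is the role distribution alignment plays in DALE.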
📝 Abstract
Deep learning has achieved significant advances in medical image segmentation, but existing models still struggle to accurately segment lesion regions. The main reason is that some lesion regions in medical images have unclear boundaries, irregular shapes, and small tissue density differences, leading to label ambiguity. However, existing models treat all data equally during training, ignoring quality differences, so noisy labels negatively impact model training and produce unstable feature representations. In this paper, a data-driven alternating learning (DALE) paradigm is proposed to optimize the model's training process and achieve stable, high-precision segmentation. The paradigm focuses on two key points: (1) reducing the impact of noisy labels, and (2) calibrating unstable representations. To mitigate the negative impact of noisy labels, a loss consistency-based collaborative optimization method is proposed, and its effectiveness is theoretically demonstrated. Specifically, label confidence parameters are introduced to dynamically adjust the influence of labels with different confidence levels during training, thus reducing the influence of noisy labels. To calibrate the learning bias of unstable representations, a distribution alignment method is proposed. This method restores the underlying distribution of unstable representations, thereby enhancing the discriminative capability of representations in ambiguous regions. Extensive experiments on various benchmarks and model backbones demonstrate the superiority of the DALE paradigm, achieving an average performance improvement of up to 7.16%.
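The confidence-weighted optimization described above can be illustrated with a minimal sketch: each label gets a learnable confidence weight that scales its per-sample loss, with a regularizer that prevents all confidences from collapsing to zero. The function name, sigmoid parameterization, and regularization strength are illustrative assumptions, not details from the paper.

```python
import numpy as np

def confidence_weighted_loss(per_sample_loss, log_conf, reg=0.1):
    """Aggregate per-sample losses with learnable label confidences.

    per_sample_loss: (n,) losses, e.g. per-image Dice or CE loss.
    log_conf: (n,) learnable logits; sigmoid maps them to confidences
              in (0, 1) (a parameterization assumed for illustration).
    reg: strength of the -log(w) term that penalizes discarding labels.
    """
    w = 1.0 / (1.0 + np.exp(-log_conf))  # confidence weights in (0, 1)
    # High-loss (likely noisy) labels can be down-weighted via small w,
    # while -reg * log(w) keeps confidences from collapsing to zero.
    return np.mean(w * per_sample_loss - reg * np.log(w + 1e-8))
```

During alternating optimization, one step would update the network under fixed confidences and the next would update `log_conf` under the fixed network, so labels whose loss stays inconsistently high are gradually trusted less.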