🤖 AI Summary
This work proposes an end-to-end, stage-wise voxel-level deep reinforcement learning framework to address performance degradation in medical image segmentation caused by annotation noise. The method models annotation noise as a voxel-dependent problem and introduces a voxel-level asynchronous advantage actor-critic (vA3C) module, enabling each voxel to act as an independent agent for autonomous optimization. A composite reward function, combining the Dice coefficient with spatial continuity, guides the learning process, while a dynamic iterative update strategy facilitates noise-robust training without manual intervention. Evaluated on three public datasets, the proposed model achieves an average improvement of over 3% in both Dice and IoU metrics, significantly outperforming existing approaches.
📝 Abstract
Deep learning has achieved significant advancements in medical image segmentation. Currently, obtaining accurate segmentation outcomes is critically reliant on large-scale datasets with high-quality annotations. However, noisy annotations are frequently encountered owing to the complex morphological structures of organs in medical images and variations among different annotators, which can substantially limit the efficacy of segmentation models. Motivated by the fact that medical imaging annotators can correct labeling errors during segmentation based on prior knowledge, we propose an end-to-end Staged Voxel-Level Deep Reinforcement Learning (SVL-DRL) framework for robust medical image segmentation under noisy annotations. This framework employs a dynamic iterative update strategy to automatically mitigate the impact of erroneous labels without requiring manual intervention. The key advancements of SVL-DRL over existing works include: i) formulating noisy annotations as a voxel-dependent problem and addressing it through a novel staged reinforcement learning framework that guarantees robust model convergence; ii) incorporating a voxel-level asynchronous advantage actor-critic (vA3C) module that conceptualizes each voxel as an autonomous agent, allowing each agent to dynamically refine its own state representation during training and thereby directly mitigating the influence of erroneous labels; iii) designing a novel action space for the agents, along with a composite reward function that strategically combines the Dice value and a spatial continuity metric to significantly boost segmentation accuracy while maintaining semantic integrity. Experiments on three public medical image datasets demonstrate State-of-The-Art (SoTA) performance under various experimental settings, with an average improvement of over 3% in both Dice and IoU scores.
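The abstract does not give the exact form of the composite reward, but the idea of combining a Dice term with a spatial continuity term can be sketched as below. This is an illustrative stand-in, not the paper's implementation: `spatial_continuity` (fraction of neighboring voxel pairs with matching labels) and the weight `lam` are hypothetical choices.

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """Soft Dice overlap between two binary voxel masks."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def spatial_continuity(pred):
    """Fraction of adjacent voxel pairs (along each axis) with equal
    labels -- a simple proxy for the paper's continuity metric."""
    agree, total = 0, 0
    for axis in range(pred.ndim):
        a = np.take(pred, range(pred.shape[axis] - 1), axis=axis)
        b = np.take(pred, range(1, pred.shape[axis]), axis=axis)
        agree += np.sum(a == b)
        total += a.size
    return agree / total

def composite_reward(pred, target, lam=0.5):
    """Hypothetical composite reward: Dice plus a weighted
    continuity bonus encouraging spatially coherent predictions."""
    return dice_score(pred, target) + lam * spatial_continuity(pred)
```

A perfectly correct, spatially uniform prediction maximizes both terms; a noisy, speckled prediction with the same Dice value receives a lower reward, which is the intended regularizing effect of the continuity term.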