🤖 AI Summary
This work addresses the instability in medical image segmentation caused by noisy automatic quality-assessment signals during preference-based fine-tuning, which can lead to detrimental updates. To mitigate this, the authors propose Region-Normalized Direct Preference Optimization (RN-DPO), a method that generates candidate masks with a base segmenter trained on limited annotated data and constructs preference pairs using weak supervision signals (such as quality scores or model uncertainty) without requiring additional pixel-level annotations. Crucially, RN-DPO normalizes preference updates by the size of the disagreement region between masks, thereby suppressing noise-induced perturbations. Experiments on two medical imaging datasets show that RN-DPO outperforms standard DPO and strong baselines, achieving more stable and consistent segmentation improvements without any additional pixel-level labeling.
📝 Abstract
While dense pixel-wise annotations remain the gold standard for medical image segmentation, they are costly to obtain and limit scalability. In contrast, many deployed systems already produce inexpensive automatic quality-control (QC) signals, such as model agreement, uncertainty measures, or learned mask-quality scores, that can be used for further model training without additional ground-truth annotation. However, these signals can be noisy and biased, making preference-based fine-tuning susceptible to harmful updates. We study Direct Preference Optimization (DPO) for segmentation from such noisy judges, using proposals generated by a supervised base segmenter trained on a small labeled set. We find that outcomes depend strongly on how preference pairs are mined: selecting the judge's top-ranked proposal can improve peak performance when the judge is reliable, but can amplify harmful errors under weaker judges. We propose Region-Normalized DPO (RN-DPO), a segmentation-aware objective that normalizes preference updates by the size of the disagreement region between masks, reducing the leverage of harmful comparisons and improving optimization stability. Across two medical datasets and multiple regimes, RN-DPO improves sustained performance and stabilizes preference-based fine-tuning, outperforming standard DPO and strong baselines without requiring additional pixel annotations.
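To make the region-normalization idea concrete, the following is a minimal sketch of what such a per-pair loss could look like. It is not the paper's implementation: the function name, the exact form of the implicit reward margin, and the choice to restrict it to disagreeing pixels are assumptions based on the standard DPO objective and the abstract's description of normalizing by the disagreement-region size.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rn_dpo_loss(logp_theta_w, logp_theta_l, logp_ref_w, logp_ref_l,
                mask_w, mask_l, beta=0.1):
    """Hypothetical region-normalized DPO loss for one preference pair.

    logp_theta_* / logp_ref_* : per-pixel log-probabilities (H, W) that the
        current policy / frozen reference assigns to the preferred (w) and
        dispreferred (l) candidate masks.
    mask_w, mask_l : binary candidate masks (H, W).
    """
    # Disagreement region: pixels where the two candidate masks differ.
    disagree = mask_w != mask_l
    n = max(int(disagree.sum()), 1)  # guard against identical masks

    # Implicit reward margins, restricted to the disagreement region and
    # divided by its size -- the "region normalization" that keeps large
    # disagreement regions from dominating the update.
    margin_w = (logp_theta_w - logp_ref_w)[disagree].sum() / n
    margin_l = (logp_theta_l - logp_ref_l)[disagree].sum() / n

    # Standard DPO logistic loss on the normalized margin difference.
    return -np.log(sigmoid(beta * (margin_w - margin_l)))
```

Note that when the policy has not yet moved from the reference, both margins are zero and the loss sits at log 2, exactly as in standard DPO; the normalization only rescales how strongly each pair pulls on the policy, in proportion to how much the two masks actually disagree.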