🤖 AI Summary
To address the accuracy degradation in small-tumor segmentation in medical images—caused by minute target sizes and irregular boundaries—this paper proposes the Adaptive Focal Loss (A-FL), a loss function with a dual dynamic adjustment mechanism that jointly considers surface smoothness, object size, and regional area ratio, thereby overcoming the static-parameter limitation of conventional Focal Loss. Integrated with a ResNet50-encoded U-Net architecture, A-FL employs surface-smoothness-aware weighting and area-ratio-driven class balancing. On the PI-CAI 2022 dataset, A-FL achieves an IoU of 0.696 (+5.5% over standard Focal Loss) and a Dice score of 0.769; on BraTS 2018, it attains a Dice score of 0.931, outperforming Dice Loss, Focal Loss, and their combinations. The method significantly enhances robustness and accuracy for segmenting small and irregular lesions.
📝 Abstract
Deep learning has achieved outstanding accuracy in medical image segmentation, particularly for objects such as organs or tumors with smooth boundaries or large sizes. However, it encounters significant difficulties with objects that have jagged boundaries or are small in size, leading to a notable decrease in segmentation effectiveness. In this context, a loss function that incorporates smoothness and volume information into a model's predictions offers a promising remedy for these shortcomings. In this work, we introduce an Adaptive Focal Loss (A-FL) function designed to mitigate class imbalance by down-weighting the loss for easy examples, thereby up-weighting hard examples and placing greater emphasis on challenging targets such as small and irregularly shaped objects. The proposed A-FL dynamically adjusts the focusing parameter based on an object's surface smoothness and size, and adjusts the class-balancing parameter based on the ratio of the targeted area to the total image area. We evaluated A-FL using a ResNet50-encoded U-Net architecture on the PI-CAI 2022 and BraTS 2018 datasets. On the PI-CAI 2022 dataset, A-FL achieved an Intersection over Union (IoU) of 0.696 and a Dice Similarity Coefficient (DSC) of 0.769, outperforming the regular Focal Loss (FL) by 5.5% and 5.4%, respectively, and surpassing the best baseline, Dice-Focal, by 2.0% and 1.2%. On the BraTS 2018 dataset, A-FL achieved an IoU of 0.883 and a DSC of 0.931. Comparative studies show that the proposed A-FL surpasses conventional methods, including Dice Loss, Focal Loss, and their hybrid variants, in IoU, DSC, Sensitivity, and Specificity. This work highlights A-FL's potential to improve deep learning models for segmenting clinically significant regions in medical images, leading to more precise and reliable diagnostic tools.
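To make the idea concrete, the sketch below shows one plausible reading of the abstract's description: the focusing parameter γ grows with a surface-irregularity proxy (boundary length relative to object size), and the class-balancing weight α is derived from the foreground-to-image area ratio. The specific formulas (`alpha = 1 - area_ratio`, `gamma = gamma_base * (1 + irregularity)`, and the neighbour-difference boundary estimate) are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def adaptive_focal_loss(probs, targets, gamma_base=2.0, eps=1e-7):
    """Illustrative Adaptive Focal Loss for a binary 2D mask.

    probs:   predicted foreground probabilities, shape (H, W)
    targets: binary ground-truth mask, shape (H, W)
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    targets = targets.astype(np.float64)

    # Class-balancing weight from the targeted-area / total-area ratio
    # (assumption: rarer foreground => larger alpha).
    area_ratio = targets.mean()
    alpha = 1.0 - area_ratio

    # Surface-smoothness proxy: boundary length estimated from
    # neighbour differences, normalized by object size (assumption).
    edges = (np.abs(np.diff(targets, axis=0)).sum()
             + np.abs(np.diff(targets, axis=1)).sum())
    size = targets.sum() + eps
    irregularity = edges / size

    # Dynamic focusing parameter: small, jagged objects get a larger
    # gamma, so hard pixels dominate the loss (assumption).
    gamma = gamma_base * (1.0 + irregularity)

    # Standard focal-loss form with the per-image alpha and gamma.
    pt = np.where(targets == 1.0, probs, 1.0 - probs)
    w = np.where(targets == 1.0, alpha, 1.0 - alpha)
    loss = -w * (1.0 - pt) ** gamma * np.log(pt)
    return loss.mean()
```

Because α and γ are recomputed per image from the mask itself, a small tumor with a ragged outline is penalized more heavily than a large smooth organ, which is the behaviour the abstract attributes to A-FL.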