🤖 AI Summary
Pleural effusion CT image segmentation suffers from semantic ambiguity and boundary misclassification caused by intensity similarity with surrounding tissues, ill-defined boundaries, and morphological variability. To address this, the paper proposes DBIF-AUNet. The model introduces a Dual-Domain Feature Disentanglement module (DDFD) to decouple anatomical and pathological feature representations, and a Branch Interaction Attention Fusion module (BIAF) that dynamically weights and fuses complementary features across domains. Densely nested skip connections and a hierarchical adaptive hybrid loss function additionally strengthen deep supervision and counter class imbalance. Evaluated on 1,622 clinical pleural effusion CT images, DBIF-AUNet achieves an IoU of 80.1% and a Dice score of 89.0%, outperforming U-Net++ and Swin-UNet. The proposed method substantially improves segmentation accuracy for complex pleural effusion lesions and enhances clinical applicability.
📝 Abstract
Pleural effusion semantic segmentation can significantly enhance the accuracy and timeliness of clinical diagnosis and treatment by precisely identifying disease severity and lesion areas. However, semantic segmentation of pleural effusion CT images currently faces multiple challenges: gray levels similar to those of surrounding tissues, blurred edges, and variable morphology. Existing methods often struggle with these diverse image variations and complex edges, primarily because direct feature concatenation introduces semantic gaps. To address these challenges, we propose the Dual-Branch Interactive Fusion Attention model (DBIF-AUNet). The model builds a densely nested skip-connection network and refines a Dual-Domain Feature Disentanglement module (DDFD), which orthogonally decouples the functions of the dual-domain branches to achieve multi-scale feature complementarity and enhance feature representations at different levels. Working in synergy with the DDFD, a Branch Interaction Attention Fusion module (BIAF) dynamically weights and fuses global, local, and frequency-band features, improving segmentation robustness. Furthermore, we implement a nested deep supervision mechanism with a hierarchical adaptive hybrid loss to effectively address class imbalance. Validated on 1,622 pleural effusion CT images from Southwest Hospital, DBIF-AUNet achieved IoU and Dice scores of 80.1% and 89.0% respectively, outperforming the state-of-the-art medical image segmentation models U-Net++ and Swin-UNet by 5.7%/2.7% and 2.2%/1.5% (IoU/Dice) respectively, demonstrating a significant improvement in segmentation accuracy on complex pleural effusion CT images.
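For reference, the reported IoU and Dice metrics follow their standard definitions for binary segmentation masks. The sketch below (NumPy; not the authors' evaluation code) shows how both scores are computed from a predicted mask and a ground-truth mask:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Standard IoU and Dice for binary masks (1 = effusion, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = intersection / (union + eps)                       # |A ∩ B| / |A ∪ B|
    dice = 2 * intersection / (pred.sum() + target.sum() + eps)  # 2|A ∩ B| / (|A| + |B|)
    return iou, dice

# Toy example: 3 predicted and 3 ground-truth pixels, 2 of which overlap
pred = np.zeros((4, 4), dtype=np.uint8)
target = np.zeros((4, 4), dtype=np.uint8)
pred[0, 0:3] = 1
target[0, 1:4] = 1
iou, dice = iou_and_dice(pred, target)
print(round(iou, 3), round(dice, 3))  # → 0.5 0.667
```

Note that Dice is always at least as large as IoU for the same masks (Dice = 2·IoU / (1 + IoU)), which is consistent with the reported 89.0% Dice versus 80.1% IoU.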