🤖 AI Summary
To address insufficient segmentation accuracy on low-quality medical images—characterized by blurred boundaries and low contrast—this paper proposes CFFormer, a hybrid CNN-Transformer architecture. It employs dual encoders to jointly model local details and global semantics, and introduces the Cross Feature Channel Attention (CFCA) module to enable channel-level feature filtering and interaction between the two pathways. The authors additionally design the X-Spatial Feature Fusion (XFF) module, which reduces the semantic gap between the two encoders' spatial features to enable smooth fusion. Evaluated on eight benchmark datasets spanning five imaging modalities, CFFormer consistently outperforms state-of-the-art methods, with particularly strong gains on datasets with blurry boundaries and low contrast.
📝 Abstract
Hybrid CNN-Transformer models are designed to combine the advantages of Convolutional Neural Networks (CNNs) and Transformers to efficiently model both local information and long-range dependencies. However, most research tends to focus on integrating the spatial features of CNNs and Transformers, while overlooking the critical importance of channel features. This is particularly significant for model performance in low-quality medical image segmentation. Effective channel feature extraction can significantly enhance the model's ability to capture contextual information and improve its representation capabilities. To address this issue, we propose a hybrid CNN-Transformer model, CFFormer, and introduce two modules: the Cross Feature Channel Attention (CFCA) module and the X-Spatial Feature Fusion (XFF) module. The model incorporates dual encoders, with the CNN encoder focusing on capturing local features and the Transformer encoder modeling global features. The CFCA module filters and facilitates interactions between the channel features from the two encoders, while the XFF module effectively reduces the significant semantic differences in spatial features, enabling smooth and cohesive spatial feature fusion. We evaluate our model across eight datasets covering five modalities to assess its generalization capability. Experimental results demonstrate that our model outperforms current state-of-the-art (SOTA) methods, with particularly superior performance on datasets characterized by blurry boundaries and low contrast.
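The abstract describes the CFCA module as filtering and exchanging channel features between the CNN and Transformer encoders. The paper's exact formulation is not given here, so the sketch below illustrates only the general idea with a generic cross-branch channel-attention pattern: each branch's feature map is squeezed to per-channel descriptors by global average pooling, and each branch is then reweighted by sigmoid gates derived from the *other* branch's descriptors. All function names and the gating scheme are assumptions for illustration, not the authors' implementation (pure Python on nested lists, shape `[C][H][W]`, to keep it framework-free):

```python
import math

def global_avg_pool(feat):
    """Squeeze a [C][H][W] feature map to one descriptor per channel."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feat]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cross_channel_attention(f_cnn, f_transformer):
    """Hypothetical cross-branch channel gating (CFCA-style interaction).

    Each branch's channels are scaled by sigmoid gates computed from the
    other branch's channel descriptors, so the two encoders exchange
    channel-level information.
    """
    d_cnn = global_avg_pool(f_cnn)          # descriptors from CNN branch
    d_tr = global_avg_pool(f_transformer)   # descriptors from Transformer branch
    gates_for_cnn = [sigmoid(d) for d in d_tr]   # Transformer gates CNN channels
    gates_for_tr = [sigmoid(d) for d in d_cnn]   # CNN gates Transformer channels
    out_cnn = [[[v * g for v in row] for row in ch]
               for ch, g in zip(f_cnn, gates_for_cnn)]
    out_tr = [[[v * g for v in row] for row in ch]
              for ch, g in zip(f_transformer, gates_for_tr)]
    return out_cnn, out_tr

# Tiny example: 2 channels of 2x2 features per branch.
f_cnn = [[[1, 1], [1, 1]], [[2, 2], [2, 2]]]
f_tr = [[[0, 0], [0, 0]], [[4, 4], [4, 4]]]
out_cnn, out_tr = cross_channel_attention(f_cnn, f_tr)
# Channel 0 of the CNN branch is gated by sigmoid(0) = 0.5, so 1 -> 0.5.
```

In a real implementation the gates would come from learned projections (e.g. small MLPs) rather than raw descriptors; this sketch only conveys the cross-encoder channel-interaction structure.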