CFFormer: Cross CNN-Transformer Channel Attention and Spatial Feature Fusion for Improved Segmentation of Low Quality Medical Images

📅 2025-01-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient segmentation accuracy on low-quality medical images—characterized by blurred boundaries and low contrast—this paper proposes CFFormer, a hybrid CNN-Transformer architecture. It employs dual encoders to jointly model local details and global semantics, and introduces the Cross Feature Channel Attention (CFCA) module to enable dynamic channel-level feature interaction and filtering between the two pathways. Additionally, the authors design the X-Spatial Feature Fusion (XFF) module, which aligns spatial features to mitigate the semantic discrepancies between the two encoders' representations. Evaluated on eight benchmark datasets spanning five imaging modalities, CFFormer consistently outperforms state-of-the-art methods, with particularly strong gains in boundary localization accuracy and segmentation robustness on low-quality images.

📝 Abstract
Hybrid CNN-Transformer models are designed to combine the advantages of Convolutional Neural Networks (CNNs) and Transformers to efficiently model both local information and long-range dependencies. However, most research tends to focus on integrating the spatial features of CNNs and Transformers, while overlooking the critical importance of channel features. This is particularly significant for model performance in low-quality medical image segmentation. Effective channel feature extraction can significantly enhance the model's ability to capture contextual information and improve its representation capabilities. To address this issue, we propose a hybrid CNN-Transformer model, CFFormer, and introduce two modules: the Cross Feature Channel Attention (CFCA) module and the X-Spatial Feature Fusion (XFF) module. The model incorporates dual encoders, with the CNN encoder focusing on capturing local features and the Transformer encoder modeling global features. The CFCA module filters and facilitates interactions between the channel features from the two encoders, while the XFF module effectively reduces the significant semantic information differences in spatial features, enabling a smooth and cohesive spatial feature fusion. We evaluate our model across eight datasets covering five modalities to test its generalization capability. Experimental results demonstrate that our model outperforms current state-of-the-art (SOTA) methods, with particularly superior performance on datasets characterized by blurry boundaries and low contrast.
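The abstract describes channel-level interaction between the two encoders: each branch's channel statistics are used to recalibrate the other branch's features before spatial fusion. As a rough illustration of that idea only—the paper's actual CFCA design is not reproduced here, and the pooling/gating choices below are assumptions—a squeeze-and-excitation-style cross-gating can be sketched in NumPy:

```python
import numpy as np

def global_avg_pool(x):
    # x: (C, H, W) feature map -> (C,) per-channel descriptor
    return x.mean(axis=(1, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cross_channel_gating(f_cnn, f_trans):
    """Hypothetical sketch of channel-wise cross interaction:
    each encoder's channel descriptor gates the *other* branch,
    so local (CNN) and global (Transformer) channel statistics
    filter one another before spatial fusion."""
    w_cnn = sigmoid(global_avg_pool(f_trans))    # Transformer stats gate CNN channels
    w_trans = sigmoid(global_avg_pool(f_cnn))    # CNN stats gate Transformer channels
    return (f_cnn * w_cnn[:, None, None],
            f_trans * w_trans[:, None, None])
```

In this toy version the gated outputs keep the original `(C, H, W)` shapes, so a subsequent spatial-fusion step (the role the XFF module plays in the paper) could concatenate or add them directly.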
Problem

Research questions and friction points this paper is trying to address.

Medical Image Segmentation
Low Quality Images
Color and Texture Processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

CFFormer
Cross Feature Channel Attention (CFCA)
X-Spatial Feature Fusion (XFF)
Jiaxuan Li
School of Computer Science, University of Nottingham Ningbo China, 199 Taikang East Road, Ningbo, Zhejiang, 315100, China
Qing Xu
School of Computer Science, University of Nottingham Ningbo China, 199 Taikang East Road, Ningbo, Zhejiang, 315100, China
Xiangjian He
University of Nottingham Ningbo China (2022.5--), University of Technology Sydney (1999.2-2022.5)
Computer Vision, Machine Learning, Data Analytics
Ziyu Liu
School of Computer Science, University of Nottingham Ningbo China, 199 Taikang East Road, Ningbo, Zhejiang, 315100, China; Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, 518055, China
Daokun Zhang
University of Nottingham Ningbo China
Graph Learning, Data Mining, Machine Learning
Ruili Wang
School of Mathematical and Computational Sciences, Massey University, Auckland, New Zealand
Rong Qu
University of Nottingham
Hyper-heuristics, Vehicle Routing, Automated Algorithm Design, Combinatorial Optimisation
Guoping Qiu
Professor of Computer Science, University of Nottingham
Image Processing, Pattern Recognition, Multimedia, Computer Vision