🤖 AI Summary
To address overconfidence and poor uncertainty quantification in radiographic image classification, which can lead to diagnostic errors, this paper proposes a dual-path cross-attention fusion framework that dynamically integrates multi-scale features from EfficientNet-B4 and ResNet-34. We introduce a novel bidirectional cross-attention mechanism to jointly model fine-grained channel-wise and spatial dependencies. To our knowledge, this is the first work to unify multi-network contextual modeling with entropy-based uncertainty visualization within a single medical image classification pipeline. Evaluated on four public datasets (COVID-19, tuberculosis, and pneumonia chest X-rays, plus retinal OCT), the method achieves AUC scores of 98.69%–100% and AUPR scores of 96.36%–100%. It significantly improves both classification accuracy and decision interpretability, while enhancing model reliability through calibrated uncertainty estimation.
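The bidirectional cross-attention fusion described above can be sketched as scaled dot-product attention applied in both directions between the two backbones' feature maps. The feature dimensions, the residual addition, and the concatenation-based fusion below are illustrative assumptions for a minimal sketch, not the paper's exact design:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats, d_k):
    # query_feats: (N_q, d), context_feats: (N_c, d)
    # each query position attends over all context positions
    scores = query_feats @ context_feats.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)          # rows sum to 1
    return weights @ context_feats              # (N_q, d)

def bidirectional_fusion(feats_a, feats_b):
    d = feats_a.shape[-1]
    a_attended = cross_attention(feats_a, feats_b, d)  # path A attends to B
    b_attended = cross_attention(feats_b, feats_a, d)  # path B attends to A
    # residual add per path, then concatenate along the feature axis
    # (one plausible fusion choice; the paper may differ)
    return np.concatenate([feats_a + a_attended,
                           feats_b + b_attended], axis=-1)

# toy example: two backbones' flattened 7x7 spatial grids,
# both projected to a hypothetical common width d = 64
rng = np.random.default_rng(0)
fa = rng.standard_normal((49, 64))  # e.g. EfficientNet-style features
fb = rng.standard_normal((49, 64))  # e.g. ResNet-style features
fused = bidirectional_fusion(fa, fb)
print(fused.shape)  # (49, 128)
```

In practice the two backbones produce feature maps of different widths, so each path would first be projected to a shared dimension before attention; that projection is omitted here for brevity.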
📝 Abstract
Accurate and reliable image classification is crucial in radiology, where diagnostic decisions significantly impact patient outcomes. Conventional deep learning models tend to produce overconfident predictions despite underlying uncertainties, potentially leading to misdiagnoses. Attention mechanisms have emerged as powerful tools in deep learning, enabling models to focus on relevant parts of the input data; combined with feature fusion, they can be effective in addressing uncertainty challenges. Cross-attention has become increasingly important in medical image analysis for capturing dependencies across features and modalities. This paper proposes a novel dual cross-attention fusion model for medical image analysis that addresses key challenges in feature integration and interpretability. Our approach introduces a bidirectional cross-attention mechanism with refined channel and spatial attention that dynamically fuses feature maps from EfficientNet-B4 and ResNet-34, leveraging multi-network contextual dependencies. The features refined through channel and spatial attention highlight discriminative patterns crucial for accurate classification. The proposed model achieved AUC scores of 99.75%, 100%, 99.93%, and 98.69% and AUPR scores of 99.81%, 100%, 99.97%, and 96.36% on COVID-19, tuberculosis, and pneumonia chest X-ray images and retinal OCT images, respectively. Entropy values and visualizations of highly uncertain samples provide interpretable insight into the model's predictions, enhancing transparency. By combining multi-scale feature extraction, bidirectional attention, and uncertainty estimation, the proposed model makes a strong contribution to medical image analysis.
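The entropy-based uncertainty estimation mentioned above is typically the Shannon entropy of the softmax output: low entropy marks confident predictions, high entropy flags samples worth surfacing for review. This minimal sketch assumes per-sample class probabilities are available and is not taken from the paper's implementation:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    # probs: (N, C) array of softmax class probabilities per sample
    # returns Shannon entropy in nats; eps guards log(0)
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

confident = np.array([[0.98, 0.01, 0.01]])   # peaked prediction
uncertain = np.array([[0.34, 0.33, 0.33]])   # near-uniform prediction

print(predictive_entropy(confident))  # small value, near 0
print(predictive_entropy(uncertain))  # near log(3) ~ 1.0986, the 3-class maximum
```

Ranking test samples by this entropy and displaying the highest-entropy images alongside their predictions is one straightforward way to produce the kind of uncertainty visualization the abstract describes.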