🤖 AI Summary
To address the high computational cost, prolonged training time, strong data dependency, and poor decision transparency of mainstream CNNs in cervical cancer detection, this paper proposes S-Net—a lightweight, interpretable deep learning framework. Methodologically: (1) we design a parameter-efficient, inference-optimized network architecture; (2) integrate transfer learning while systematically characterizing negative transfer mechanisms and pixel-intensity bias in medical imagery; and (3) unify SHAP, LIME, and Grad-CAM for multi-granularity interpretability analysis. Evaluated on the Pap smear dataset, S-Net achieves 99.99% classification accuracy, 3.2× faster inference speed, and an 87% reduction in model parameters compared to ResNet and DenseNet baselines. The framework simultaneously delivers state-of-the-art accuracy, minimal resource requirements, and clinically meaningful interpretability—providing a deployable AI solution for low-resource cervical cancer screening.
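The claimed 87% parameter reduction is plausible with lightweight convolutional designs. As a back-of-the-envelope illustration (not S-Net's actual configuration, whose layer sizes are not given here), a standard 3×3 convolution with `c_in` input and `c_out` output channels costs `9 * c_in * c_out` weights, while a depthwise separable replacement, as popularized by MobileNet-style architectures, costs `9 * c_in + c_in * c_out`:

```python
def std_conv_params(c_in, c_out, k=3):
    # standard conv: one k*k kernel per (input, output) channel pair (bias omitted)
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k=3):
    # depthwise k*k conv (one kernel per input channel) + 1x1 pointwise conv
    return k * k * c_in + c_in * c_out

# illustrative channel sizes only
c_in, c_out = 128, 256
std = std_conv_params(c_in, c_out)             # 294912 weights
sep = depthwise_separable_params(c_in, c_out)  # 33920 weights
print(std, sep, round(1 - sep / std, 3))       # ~88% fewer parameters for this layer
```

Savings of this magnitude at every convolutional layer are what make sub-10%-of-baseline parameter counts achievable without necessarily sacrificing accuracy.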
📝 Abstract
Early and accurate detection through Pap smear analysis is critical to improving patient outcomes and reducing mortality from cervical cancer. However, state-of-the-art (SOTA) Convolutional Neural Networks (CNNs) require substantial computational resources, extended training time, and large datasets. In this study, a lightweight CNN model, S-Net (Simple Net), is developed specifically for cervical cancer detection and classification from Pap smear images to address these limitations. Alongside S-Net, six SOTA CNNs were evaluated using transfer learning, covering multi-path (DenseNet201, ResNet152), depth-based (SEResNet152), width-based multi-connection (Xception), depthwise separable convolution (MobileNetV2), and spatial-exploitation-based (VGG19) designs. All models, including S-Net, achieved comparable accuracy, with S-Net reaching 99.99%. However, S-Net significantly outperforms the SOTA CNNs in computational efficiency and inference time, making it a more practical choice for real-time and resource-constrained applications. A major limitation of CNN-based medical diagnosis remains the lack of transparency in the decision-making process. To address this, Explainable AI (XAI) techniques, such as SHAP, LIME, and Grad-CAM, were employed to visualize and interpret the key image regions influencing model predictions. The novelty of this study lies in the development of a highly accurate yet computationally lightweight model (S-Net) capable of rapid inference while remaining interpretable through XAI integration. Furthermore, this work analyzes the behavior of the SOTA CNNs, investigates the effects of negative transfer learning on Pap smear images, and examines pixel-intensity patterns in correctly and incorrectly classified samples.
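Of the XAI techniques named above, Grad-CAM has the simplest core computation: weight each final-layer activation map by its spatially pooled gradient, sum, and apply ReLU. A minimal sketch of that computation, using synthetic placeholder activations and gradients rather than outputs from S-Net or any real model:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap.

    activations: (K, H, W) feature maps from the last conv layer
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps
    """
    # alpha_k: global-average-pool each gradient map over its spatial dims
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # weighted sum of activation maps, then ReLU
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    cam = np.maximum(cam, 0.0)
    # normalize to [0, 1] for overlaying on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))            # synthetic activations
grads = rng.standard_normal((8, 7, 7))  # synthetic gradients
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the Pap smear image, highlighting the cell regions that drove the prediction.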