Explainability-Aware Evaluation of Transfer Learning Models for IoT DDoS Detection Under Resource Constraints

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of comprehensive evaluation integrating reliability, efficiency, and interpretability in DDoS attack detection for resource-constrained IoT environments. Leveraging the CICDDoS2019 dataset, network traffic is transformed into image representations, and seven pre-trained convolutional neural network (CNN) models are systematically assessed for multi-class DDoS detection performance. The work combines interpretability techniques—Grad-CAM and SHAP—with robust statistical metrics, including the Matthews Correlation Coefficient (MCC), Youden's index, and confidence intervals, to conduct a multidimensional analysis across detection performance, inference latency, training cost, and explanation consistency. Results show that DenseNet169 achieves the best trade-off between reliability and interpretability, while MobileNetV3 offers the best balance between latency and accuracy for deployment on fog nodes, providing an empirical foundation for applying transfer learning models in real-world IoT security scenarios.
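The reliability metrics named above (MCC and Youden's index) are computed from confusion-matrix counts. As a minimal illustrative sketch, the binary-case formulas can be written as follows; the counts used here are hypothetical and not taken from the paper:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from binary confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def youden_j(tp, tn, fp, fn):
    """Youden's J statistic = sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

# Hypothetical counts for one attack class (illustration only)
tp, tn, fp, fn = 90, 95, 5, 10
print(round(mcc(tp, tn, fp, fn), 3))       # 0.851
print(round(youden_j(tp, tn, fp, fn), 3))  # 0.85
```

For the multi-class setting evaluated in the paper, MCC generalizes to the full K×K confusion matrix (e.g. via `sklearn.metrics.matthews_corrcoef`), while Youden's index is typically reported per class in a one-vs-rest fashion.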

📝 Abstract
Distributed denial-of-service (DDoS) attacks threaten the availability of Internet of Things (IoT) infrastructures, particularly under resource-constrained deployment conditions. Although transfer learning models have shown promising detection accuracy, their reliability, computational feasibility, and interpretability in operational environments remain insufficiently explored. This study presents an explainability-aware empirical evaluation of seven pre-trained convolutional neural network architectures for multi-class IoT DDoS detection using the CICDDoS2019 dataset and an image-based traffic representation. The analysis integrates performance metrics, reliability-oriented statistics (MCC, Youden Index, confidence intervals), latency and training cost assessment, and interpretability evaluation using Grad-CAM and SHAP. Results indicate that DenseNet and MobileNet-based architectures achieve strong detection performance while demonstrating superior reliability and compact, class-consistent attribution patterns. DenseNet169 offers the strongest reliability and interpretability alignment, whereas MobileNetV3 provides an effective latency-accuracy trade-off for fog-level deployment. The findings emphasize the importance of combining performance, reliability, and explainability criteria when selecting deep learning models for IoT DDoS detection.
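The abstract refers to an image-based traffic representation fed to pre-trained CNNs. The paper's exact transform is not specified here; a common scheme is to min-max scale a flow's numeric feature vector and reshape it into a square grayscale image, sketched below (the feature count of 77 and the 9×9 layout are illustrative assumptions, not values from the paper):

```python
import numpy as np

def flow_to_image(features, side=9):
    """Encode a flow's numeric feature vector as a side x side grayscale image.

    Truncates or zero-pads to side*side values, then min-max scales to 0-255.
    This is one common image-encoding scheme; the paper's pipeline may differ.
    """
    v = np.asarray(features, dtype=np.float64).ravel()[: side * side]
    v = np.pad(v, (0, side * side - v.size))  # zero-pad to a full square
    lo, hi = v.min(), v.max()
    scaled = (v - lo) / (hi - lo) * 255 if hi > lo else np.zeros_like(v)
    return scaled.reshape(side, side).astype(np.uint8)

# Hypothetical 77-dimensional flow feature vector
img = flow_to_image(np.arange(77.0))
print(img.shape, img.dtype)  # (9, 9) uint8
```

Resized and channel-replicated versions of such images are what ImageNet-pretrained architectures like DenseNet169 and MobileNetV3 would then consume.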
Problem

Research questions and friction points this paper is trying to address.

Transfer Learning
IoT DDoS Detection
Explainability
Resource Constraints
Model Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainability-aware evaluation
Transfer learning
IoT DDoS detection
Grad-CAM and SHAP
Resource-constrained deployment