🤖 AI Summary
This study addresses the lack of systematic understanding of the trade-off between computational efficiency and accuracy in convolutional neural networks (CNNs) under distributed training. It presents the first comprehensive analysis of how CNN architectures and data augmentation strategies jointly influence model accuracy and resource consumption in such settings. Through extensive comparative experiments, the work evaluates the accuracy and computational overhead of various architecture–augmentation combinations. The findings reveal that specific pairings can significantly improve training efficiency while preserving accuracy, offering both theoretical insights and practical guidance for deploying models in resource-constrained environments.
📝 Abstract
Convolutional Neural Networks (CNNs) have proven highly effective across a broad spectrum of computer vision tasks, such as classification, identification, and segmentation. These models can be deployed in both centralized and distributed environments, depending on the computational demands of the task. While much of the literature has focused on the explainability of CNNs, which is essential for building trust and confidence in their predictions, there remains a gap in understanding their impact on computational resources, particularly in distributed training contexts. In this study, we analyze how CNN architecture choices influence model accuracy and investigate additional factors that affect computational efficiency in distributed systems. Our findings contribute valuable insights for optimizing the deployment of CNNs in resource-intensive scenarios, paving the way for further exploration of variables critical to distributed learning.