🤖 AI Summary
This study systematically investigates the impact of hyperparameter combinations on the performance of deep convolutional neural networks (DCNNs) in binary classification of crack images. A lightweight DCNN architecture—comprising two convolutional layers, two pooling layers, one Dropout layer, and one fully connected layer—is designed and evaluated quantitatively on a balanced dataset of positive (crack) and negative (non-crack) images. The synergistic effects of three key hyperparameters are analyzed: pooling strategy (MaxPooling vs. AveragePooling), activation function (tanh vs. ReLU), and optimizer (Adam vs. SGD). Empirical results reveal, for the first time, that the combination MaxPooling + tanh + Adam yields statistically significant improvements in classification accuracy—outperforming the second-best configuration by a notable margin. This finding establishes a reproducible, computationally efficient hyperparameter tuning paradigm tailored to industrial defect recognition tasks, thereby addressing a critical gap in the quantitative, systematic analysis of hyperparameters for crack detection.
📝 Abstract
The performance of a classifier depends on the tuning of its parameters. In this paper, we experimentally study the impact of various tuning parameters on the performance of a deep convolutional neural network (DCNN). In the experimental evaluation, we consider a DCNN classifier that consists of 2 convolutional layers (CL), 2 pooling layers (PL), 1 dropout layer, and a dense layer. To observe the impact of the pooling, activation function, and optimizer tuning parameters, we utilize a crack image dataset with two classes: negative and positive. The experimental results demonstrate that, with max pooling, the DCNN performs best with the Adam optimizer and the tanh activation function.
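As an illustration, the lightweight architecture described above can be sketched in `tf.keras`. Note this is a hypothetical reconstruction: the abstract only fixes the layer types and the three hyperparameters under study; filter counts, kernel sizes, input resolution, and dropout rate below are assumptions.

```python
# Hypothetical sketch of the lightweight DCNN from the abstract:
# 2 convolutional layers, 2 pooling layers, 1 dropout, 1 dense output.
# Filter counts, kernel sizes, input shape, and dropout rate are
# illustrative assumptions, not values reported by the paper.
from tensorflow.keras import layers, models


def build_dcnn(pooling="max", activation="tanh", optimizer="adam",
               input_shape=(120, 120, 3)):
    """Build the binary crack classifier for one hyperparameter combination."""
    Pool = layers.MaxPooling2D if pooling == "max" else layers.AveragePooling2D
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, (3, 3), activation=activation),  # CL 1
        Pool((2, 2)),                                      # PL 1
        layers.Conv2D(32, (3, 3), activation=activation),  # CL 2
        Pool((2, 2)),                                      # PL 2
        layers.Flatten(),
        layers.Dropout(0.5),                               # dropout layer
        layers.Dense(1, activation="sigmoid"),             # negative vs. positive
    ])
    model.compile(optimizer=optimizer, loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model


# The best-performing combination reported in the paper:
model = build_dcnn(pooling="max", activation="tanh", optimizer="adam")
```

Swapping the `pooling`, `activation`, and `optimizer` arguments reproduces the eight combinations implied by the study (MaxPooling/AveragePooling × tanh/ReLU × Adam/SGD).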