🤖 AI Summary
This work addresses the challenge of efficient environmental sound classification in resource-constrained settings or when large-scale pretraining data are unavailable. The authors propose a multi-acoustic feature stacking strategy that integrates Log-Mel spectrograms, MFCCs, GTCCs, Spectral Contrast, Chroma, and Tonnetz features, evaluated systematically with both CNN and Audio Spectrogram Transformer (AST) architectures on the ESC-50 and UrbanSound8K datasets. Experimental results show that, without large-scale pretraining, the feature-enhanced CNN is substantially more data- and compute-efficient than AST. The approach thus offers a lightweight, effective alternative for deploying environmental sound classification on edge devices, mitigating the heavy reliance of transformer-based models on extensive pretraining.
📝 Abstract
Environmental sound classification (ESC) has gained significant attention due to its diverse applications in smart city monitoring, fault detection, acoustic surveillance, and manufacturing quality control. Convolutional neural networks (CNNs) are widely used for ESC, and feature stacking techniques have been explored to enhance their performance by aggregating complementary acoustic descriptors into richer input representations. In this paper, we investigate CNN-based models employing various stacked feature combinations, including Log-Mel Spectrogram (LM), Spectral Contrast (SPC), Chroma (CH), Tonnetz (TZ), Mel-Frequency Cepstral Coefficients (MFCCs), and Gammatone Cepstral Coefficients (GTCCs). Experiments are conducted on the widely used ESC-50 and UrbanSound8K datasets under different training regimes, including pretraining on ESC-50, fine-tuning on UrbanSound8K, and comparison with Audio Spectrogram Transformer (AST) models pretrained on large-scale corpora such as AudioSet. This experimental design enables an analysis of how feature-stacked CNNs compare with transformer-based models under varying levels of training data and pretraining diversity. The results indicate that feature-stacked CNNs offer a more computationally and data-efficient alternative when large-scale pretraining or extensive training data are unavailable, making them particularly well suited for resource-constrained and edge-level sound classification scenarios.