Performance Comparison of CNN and AST Models with Stacked Features for Environmental Sound Classification

📅 2026-02-10
🤖 AI Summary
This work addresses the challenge of efficient environmental sound classification in resource-constrained settings or when large-scale pretraining data are unavailable. The authors propose a multi-acoustic feature stacking strategy that integrates Log-Mel spectrograms, MFCCs, GTCCs, Spectral Contrast, Chroma, and Tonnetz features, evaluated systematically with both CNN and Audio Spectrogram Transformer (AST) architectures on the ESC-50 and UrbanSound8K datasets. Experimental results demonstrate that, without large-scale pretraining, the feature-enhanced CNN significantly outperforms AST in both data and computational efficiency. This approach offers a lightweight and effective alternative for deploying environmental sound classification on edge devices, effectively mitigating the heavy reliance of transformer-based models on extensive pretraining.

📝 Abstract
Environmental sound classification (ESC) has gained significant attention due to its diverse applications in smart city monitoring, fault detection, acoustic surveillance, and manufacturing quality control. To enhance CNN performance, feature stacking techniques have been explored to aggregate complementary acoustic descriptors into richer input representations. In this paper, we investigate CNN-based models employing various stacked feature combinations, including Log-Mel Spectrogram (LM), Spectral Contrast (SPC), Chroma (CH), Tonnetz (TZ), Mel-Frequency Cepstral Coefficients (MFCCs), and Gammatone Cepstral Coefficients (GTCC). Experiments are conducted on the widely used ESC-50 and UrbanSound8K datasets under different training regimes, including pretraining on ESC-50, fine-tuning on UrbanSound8K, and comparison with Audio Spectrogram Transformer (AST) models pretrained on large-scale corpora such as AudioSet. This experimental design enables an analysis of how feature-stacked CNNs compare with transformer-based models under varying levels of training data and pretraining diversity. The results indicate that feature-stacked CNNs offer a more computationally and data-efficient alternative when large-scale pretraining or extensive training data are unavailable, making them particularly well suited for resource-constrained and edge-level sound classification scenarios.
Problem

Research questions and friction points this paper is trying to address.

Environmental Sound Classification, Feature Stacking, CNN, Audio Spectrogram Transformer, Resource-Constrained Scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feature Stacking, Environmental Sound Classification, CNN, Audio Spectrogram Transformer, Data Efficiency
Parinaz Binandeh Dehaghani
SYSTEC-ARISE, Faculty of Engineering, University of Porto, Portugal
Danilo Pena
ResoSight, Montreal, Canada
A. Pedro Aguiar
Professor of Electrical and Computer Engineering, Faculty of Engineering, University of Porto
Control Theory and Applications, Signals and Systems, Control Systems, Robotics, Autonomous Vehicles