🤖 AI Summary
To address the limited representational capacity of audio Masked Autoencoders (MAEs) on classification and speech tasks, this paper proposes AudioMAE++, which enhances the standard MAE framework with macaron-style Transformer layers and SwiGLU feed-forward networks to improve the modeling of Mel-spectrogram features. The model employs block-wise masking and large-scale self-supervised pretraining on AudioSet. After pretraining, AudioMAE++ achieves strong performance across ten diverse downstream tasks, including audio classification, automatic speech recognition, and acoustic event detection, outperforming existing MAE-based approaches. Notably, it matches or exceeds standard MAE baselines that have up to four times more parameters, setting new state-of-the-art results on multiple benchmarks. The core contribution is the unification of architectural refinement with efficient audio representation learning: lightweight structural modifications, namely macaron-style stacking and SwiGLU activations, substantially enhance audio MAE performance while maintaining excellent scalability.
📝 Abstract
Masked Autoencoders (MAEs) trained on audio spectrogram patches have emerged as a prominent approach for learning self-supervised audio representations. While several recent papers have evaluated key aspects of training MAEs on audio data, most of these approaches still rely on vanilla transformer building blocks, whereas the transformer community has steadily integrated newer architectural advancements. In this work, we propose AudioMAE++, a revamped audio masked autoencoder with two such enhancements: macaron-style transformer blocks and gated linear units. When pretrained on the AudioSet dataset, the proposed AudioMAE++ models outperform existing MAE-based approaches on 10 diverse downstream tasks, demonstrating excellent performance on audio classification and speech-based benchmarks. The proposed AudioMAE++ models also exhibit excellent scaling characteristics, outperforming directly comparable standard MAE baselines with up to 4x more parameters.
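To make the two named enhancements concrete, here is a minimal NumPy sketch of a SwiGLU feed-forward layer and a macaron-style block (two half-step FFN residuals sandwiching attention). This is an illustration under standard formulations of these techniques, not the paper's implementation: the function names, the toy uniform-mixing "attention", and the omission of layer normalization are all simplifying assumptions.

```python
import numpy as np

def silu(x):
    # SiLU (swish) activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, w_gate, w_up, w_down):
    # SwiGLU feed-forward: the up projection is elementwise-gated
    # by a SiLU-activated gate branch, then projected back down.
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down

def macaron_block(x, attn, ffn_a, ffn_b):
    # Macaron-style block: two half-step feed-forward residuals
    # sandwich the attention sub-layer (layer norms omitted for brevity).
    x = x + 0.5 * ffn_a(x)
    x = x + attn(x)
    x = x + 0.5 * ffn_b(x)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, h = 4, 8, 16  # sequence length, model dim, FFN hidden dim
    w = {name: 0.1 * rng.standard_normal(shape) for name, shape in
         [("gate", (d, h)), ("up", (d, h)), ("down", (h, d))]}
    ffn = lambda x: swiglu_ffn(x, w["gate"], w["up"], w["down"])
    # Toy stand-in for self-attention: mix every token uniformly.
    attn = lambda x: np.ones((n, 1)) * x.mean(axis=0, keepdims=True)
    y = macaron_block(rng.standard_normal((n, d)), attn, ffn, ffn)
    print(y.shape)  # (4, 8): token and feature dimensions are preserved
```

The half-step (0.5) residual scaling is the defining trait of macaron-style stacking, originally motivated by viewing the block as a split-step numerical integrator; swapping the sketch's toy mixer for real multi-head attention does not change the surrounding structure.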