🤖 AI Summary
How can music representations be structured to mirror the hierarchy of human auditory perception? This paper proposes training an audio autoencoder to reconstruct inputs from noised versions of their encodings, combined with a psychoacoustically grounded perceptual loss. This induces a perceptual hierarchy in the latent space: perceptually salient, high-level information (e.g., pitch) is captured in coarser representation structures than under conventional training. The resulting hierarchy also improves latent diffusion decoding, yielding gains in estimating pitch surprisal in music and in predicting EEG neural responses to music listening. Pretrained weights are publicly released, offering a perception-driven approach to music representation learning.
📝 Abstract
We argue that training autoencoders to reconstruct inputs from noised versions of their encodings, when combined with perceptual losses, yields encodings that are structured according to a perceptual hierarchy. We demonstrate the emergence of this hierarchical structure by showing that, after training an audio autoencoder in this manner, perceptually salient information is captured in coarser representation structures than with conventional training. Furthermore, we show that such perceptual hierarchies improve latent diffusion decoding in the context of estimating pitch surprisal in music and predicting EEG brain responses to music listening. Pretrained weights are available at github.com/CPJKU/pa-audioic.
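To make the training objective concrete, below is a minimal PyTorch sketch of the noised-latent reconstruction setup described in the abstract: encode the input, noise the encoding, and decode back to the clean signal under a perceptual loss. The toy architecture, the per-channel noise schedule, and the multi-resolution spectral loss (used here as a generic stand-in for the paper's psychoacoustically grounded perceptual loss) are all illustrative assumptions, not the authors' implementation; see the released code for the actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisedLatentAE(nn.Module):
    """Toy 1-D convolutional autoencoder; the architecture is a placeholder."""

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, stride=4, padding=4), nn.GELU(),
            nn.Conv1d(32, latent_dim, kernel_size=9, stride=4, padding=4),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, 32, kernel_size=8, stride=4, padding=2), nn.GELU(),
            nn.ConvTranspose1d(32, 1, kernel_size=8, stride=4, padding=2),
        )
        # Structured noise (illustrative schedule): higher-index latent channels
        # receive more noise, so information that must survive noising is pushed
        # into the coarse, low-noise channels -- the claimed hierarchy.
        self.register_buffer(
            "noise_scale", torch.linspace(0.05, 1.0, latent_dim).view(1, -1, 1)
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        z = self.encoder(wav)
        z_noised = z + self.noise_scale * torch.randn_like(z)  # noise the encoding
        return self.decoder(z_noised)  # reconstruct the *clean* input from the noised code

def spectral_loss(x_hat, x, fft_sizes=(512, 1024, 2048)):
    """Multi-resolution log-magnitude STFT loss; a common proxy for a perceptual loss."""
    loss = 0.0
    for n_fft in fft_sizes:
        window = torch.hann_window(n_fft, device=x.device)
        mag = lambda y: torch.stft(
            y.squeeze(1), n_fft, hop_length=n_fft // 4,
            window=window, return_complex=True,
        ).abs()
        loss = loss + F.l1_loss(torch.log1p(mag(x_hat)), torch.log1p(mag(x)))
    return loss / len(fft_sizes)

# One training step on dummy data.
model = NoisedLatentAE()
wav = torch.randn(8, 1, 16384)  # batch of dummy waveforms
recon = model(wav)
loss = F.l1_loss(recon, wav) + spectral_loss(recon, wav)
loss.backward()
```

The key design choice is that the decoder never sees the clean code: reconstruction quality then depends on how robustly the encoder packs salient content into the dimensions least corrupted by noise, which is what yields the coarse-to-fine structure the paper analyzes.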