🤖 AI Summary
This study addresses the challenge of estimating musical audio surprisal (the degree to which an auditory event violates listener expectations) using the information content (IC) of autoregressive diffusion models (ADMs). The authors compute IC from negative log-likelihoods under two classes of diffusion ordinary differential equations and relate IC at different noise scales to musical and audio features at different granularities, evaluating on two tasks: modelling monophonic pitch surprisal and detecting segment boundaries in multi-track audio. Compared with a state-of-the-art Generative Infinite-Vocabulary Transformer (GIVT), the diffusion-based IC matches or exceeds performance on both tasks, with further gains when appropriate noise scales are selected. The key contribution is exploiting the gradual denoising property of diffusion processes to model musical expectation violation, connecting computational music cognition with generative audio analysis.
📝 Abstract
Recently, the information content (IC) of predictions from a Generative Infinite-Vocabulary Transformer (GIVT) has been used to model musical expectancy and surprisal in audio. We investigate the effectiveness of such modelling using IC calculated with autoregressive diffusion models (ADMs). We show empirically that IC estimates from models based on two different diffusion ordinary differential equations (ODEs) describe diverse data better, in terms of negative log-likelihood, than a GIVT. We evaluate the effectiveness of diffusion-model IC in capturing aspects of surprisal on two tasks: (1) modelling monophonic pitch surprisal, and (2) detecting segment boundaries in multi-track audio. On both tasks, the diffusion models match or exceed the performance of a GIVT. We hypothesize that the surprisal estimated at different noise levels of the diffusion process corresponds to the surprisal of music and audio features present at different audio granularities. Testing this hypothesis, we find that results on both surprisal tasks improve when appropriate noise levels are chosen. Code is available at github.com/SonyCSLParis/audioic.
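The central quantity is IC estimated at a single noise level of the diffusion process. A minimal Monte-Carlo sketch of the idea follows; the denoiser, the unit weighting across noise levels, and the function names are illustrative assumptions, not the paper's implementation (a real ADM would condition on past audio frames and use an ODE-derived likelihood weighting):

```python
import numpy as np

rng = np.random.default_rng(0)

def dummy_denoiser(x_noisy, sigma):
    # Hypothetical stand-in for a trained ADM noise predictor; a real
    # model would condition on preceding audio frames and on sigma.
    return np.zeros_like(x_noisy)

def ic_at_noise_level(x, sigma, denoiser, n_samples=64):
    # Monte-Carlo estimate of a per-level denoising loss, used here as a
    # proxy for the negative log-likelihood contribution at one noise
    # scale (assumption: simple unit weighting across levels).
    errs = []
    for _ in range(n_samples):
        eps = rng.standard_normal(x.shape)       # sample Gaussian noise
        eps_hat = denoiser(x + sigma * eps, sigma)
        errs.append(np.mean((eps - eps_hat) ** 2))
    return float(np.mean(errs))

# Toy "audio frame" embedding; low and high noise scales probe
# fine-grained and coarse-grained structure respectively.
x = rng.standard_normal(16)
ic_low = ic_at_noise_level(x, 0.1, dummy_denoiser)
ic_high = ic_at_noise_level(x, 1.0, dummy_denoiser)
```

Sweeping `sigma` and reading off the resulting IC curve is the kind of per-noise-level probing the abstract's hypothesis suggests: different scales surface surprisal of features at different audio granularities.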