AI Summary
To address the poor noise robustness and the difficulty of modeling polyphonic structure in automatic piano transcription, this paper proposes the first discrete denoising diffusion framework tailored to this symbolic music task. Methodologically, it (1) incorporates features from a pretrained acoustic model as conditional guidance, enabling progressive, high-resolution piano-roll prediction, and (2) introduces a stage-wise differential transition strategy together with a dedicated neighborhood-attention-based denoising module, marking the first adaptation of discrete diffusion's fine-grained modeling capability to note-level sequence generation. Evaluated on the MAESTRO dataset, the method achieves a significantly higher F1 score than existing diffusion-based approaches and mainstream baselines. The source code is publicly available.
Abstract
Diffusion models have been widely used in the generative domain due to their convincing performance in modeling complex data distributions. They have also shown competitive results on discriminative tasks, such as image segmentation. While diffusion models have been explored for automatic music transcription as well, their performance has yet to reach a competitive level. In this paper, we focus on the refinement capabilities of discrete diffusion models and present a novel architecture for piano transcription. Our model uses Neighborhood Attention layers as the denoising module, gradually predicting the target high-resolution piano roll conditioned on finetuned features of a pretrained acoustic model. To further enhance refinement, we devise a novel strategy that applies distinct transition states during the training and inference stages of discrete diffusion. Experiments on the MAESTRO dataset show that our approach outperforms previous diffusion-based piano transcription models and the baseline model in terms of F1 score. Our code is available at https://github.com/hanshounsu/d3rm.
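To give intuition for the refinement process the abstract describes, the following is a minimal, self-contained sketch of absorbing-state discrete diffusion over a binarized piano roll: cells are progressively masked in the forward process, then iteratively unmasked by a denoiser's clean-roll prediction in the reverse process. All names here (`MASK`, `toy_denoiser`, the unmasking schedule) are illustrative assumptions, not the paper's actual implementation; in particular, the trivial denoiser stands in for the Neighborhood Attention module conditioned on acoustic features.

```python
import random

# Illustrative assumption: states are note-off (0), note-on (1), and an
# absorbing [MASK] state (2), as in absorbing-state discrete diffusion.
MASK = 2

def forward_mask(x0, t, num_steps, rng):
    """Forward corruption: mask each cell independently with prob t/num_steps."""
    p = t / num_steps
    return [[MASK if rng.random() < p else v for v in row] for row in x0]

def toy_denoiser(xt, cond):
    """Stand-in for the learned denoiser: predicts the clean roll x0 from the
    conditioning signal (here, cond is simply the ground-truth roll)."""
    return [[1 if c > 0.5 else 0 for c in row] for row in cond]

def reverse_refine(xt, cond, num_steps):
    """Reverse process: at step t, unmask ~1/t of the remaining MASK cells,
    filling them with the denoiser's x0 prediction (coarse-to-fine refinement)."""
    x = [row[:] for row in xt]
    for t in range(num_steps, 0, -1):
        pred = toy_denoiser(x, cond)
        masked = [(i, j) for i, row in enumerate(x)
                  for j, v in enumerate(row) if v == MASK]
        k = -(-len(masked) // t)  # ceiling division
        for i, j in masked[:k]:
            x[i][j] = pred[i][j]
    return x

rng = random.Random(0)
x0 = [[rng.randint(0, 1) for _ in range(16)] for _ in range(8)]  # frames x pitches
xt = forward_mask(x0, t=10, num_steps=10, rng=rng)  # t = T: fully masked
out = reverse_refine(xt, cond=x0, num_steps=10)
assert out == x0  # with perfect conditioning, the roll is fully recovered
```

With a perfect conditioning signal the toy reverse process reconstructs the roll exactly; in the actual model, the denoiser must infer each cell from acoustic features, and the stepwise unmasking is what allows earlier predictions to be refined before the roll is finalized.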