AI Summary
Automatic Music Transcription (AMT) for polyphonic piano audio faces significant challenges, including note overlaps and the difficulty of jointly estimating note onsets, offsets, and pitches. This paper proposes an end-to-end lightweight CNN framework that, for the first time, directly feeds Constant-Q Transform (CQT) features into a convolutional network to perform frame-level multi-note activation detection and score generation. Departing from conventional acoustic-to-symbol separation pipelines, the method jointly optimizes pitch classification and onset/offset localization via time-frequency spectrogram modeling. Evaluated on the MAPS dataset under MIREX evaluation criteria, the approach achieves an F1-score of 78.3%, outperforming an HMM-based baseline by 12.6 percentage points; inference latency remains below 100 ms, enabling real-time polyphonic piano score output. The core contribution lies in the tightly coupled CQT-CNN architecture, which effectively enhances joint time-frequency precision in polyphonic AMT.
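Frame-level multi-note activation maps are commonly decoded into note events by thresholding per-key probabilities and reading run boundaries as onsets and offsets. The sketch below illustrates that general post-processing idea; the helper name `activations_to_notes`, the 0.5 threshold, and the toy input are assumptions for illustration, not details from the paper.

```python
import numpy as np

def activations_to_notes(probs, threshold=0.5):
    """Convert a frame-level activation map (frames x 88 piano keys) into
    (key_index, onset_frame, offset_frame) events.

    Hypothetical decoding sketch: each contiguous run of above-threshold
    frames for a key becomes one note event.
    """
    active = probs >= threshold
    notes = []
    for key in range(active.shape[1]):
        col = active[:, key].astype(int)
        # Pad with zeros so every run has both a rising and a falling edge.
        edges = np.diff(np.concatenate(([0], col, [0])))
        onsets = np.where(edges == 1)[0]    # frame where the run starts
        offsets = np.where(edges == -1)[0]  # first frame after the run ends
        for on, off in zip(onsets, offsets):
            notes.append((key, int(on), int(off)))
    return notes

# Usage: two overlapping notes in a toy 6-frame, 88-key activation map.
probs = np.zeros((6, 88))
probs[1:4, 60] = 0.9  # key 60 active in frames 1-3
probs[2:6, 64] = 0.8  # key 64 active in frames 2-5
events = activations_to_notes(probs)  # [(60, 1, 4), (64, 2, 6)]
```

Overlapping runs on different keys decode independently, which is what makes this style of frame-wise decoding suitable for polyphonic output.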
Abstract
Automatic music transcription (AMT) is the problem of analyzing an audio recording of a musical piece and detecting the notes being played, with the goal of producing a score representation of the piece. AMT is particularly challenging for polyphonic music, where the sound signal contains multiple notes played simultaneously. In this work, we design a processing pipeline that transforms classical piano audio files in .wav format into a music score representation. Features are extracted from the audio signal using the constant-Q transform, and the resulting coefficients are used as input to the convolutional neural network (CNN) model.
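The constant-Q transform places bin center frequencies geometrically (one bin per semitone for piano), so each piano key maps to one frequency bin. A minimal, numpy-only sketch of this front end for a single audio frame follows; the function `cqt_frame` and its parameter choices (A0 = 27.5 Hz, 88 bins, 12 bins per octave, Hann-windowed kernels) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cqt_frame(x, sr, fmin=27.5, n_bins=88, bins_per_octave=12):
    """Naive constant-Q magnitude spectrum of one audio frame.

    Bin k is centered at fmin * 2**(k / bins_per_octave); the analysis
    window shrinks with frequency so that Q (center freq / bandwidth)
    stays constant across bins.
    """
    Q = 1.0 / (2.0 ** (1.0 / bins_per_octave) - 1.0)
    mags = np.zeros(n_bins)
    for k in range(n_bins):
        fk = fmin * 2.0 ** (k / bins_per_octave)
        Nk = min(int(np.ceil(Q * sr / fk)), len(x))  # window length for bin k
        n = np.arange(Nk)
        kernel = np.hanning(Nk) * np.exp(-2j * np.pi * fk * n / sr)
        mags[k] = np.abs(np.dot(x[:Nk], kernel)) / Nk
    return mags

# Usage: a 440 Hz sine (A4) should peak at bin 48, since 27.5 * 2**(48/12) = 440.
sr = 22050
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440.0 * t)
mags = cqt_frame(x, sr)
peak_bin = int(np.argmax(mags))  # 48
```

In practice a library implementation (e.g. an FFT-based CQT) would be used for speed; the per-bin geometric spacing is what aligns the feature axis with the 88 piano keys before the coefficients are fed to the CNN.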