Count The Notes: Histogram-Based Supervision for Automatic Music Transcription

📅 2025-11-18
🤖 AI Summary
Automatic music transcription (AMT) heavily relies on costly and scarce frame-level annotations; existing weakly supervised approaches still require local alignment—e.g., dynamic time warping (DTW)—leading to error propagation and high computational overhead. This paper proposes a novel weakly supervised learning framework that eliminates the need for local alignment. We introduce note-event histograms—i.e., segment-level note frequency statistics—as the sole supervisory signal, and integrate them with an expectation-maximization (EM) algorithm and deep neural networks for end-to-end optimization. Crucially, our method dispenses entirely with DTW and soft alignment mechanisms, requiring only segment-level annotations during training, thereby substantially reducing annotation cost and computational complexity. Evaluated on piano, guitar, and multi-instrument datasets, our approach matches or surpasses state-of-the-art weakly supervised methods, with particularly strong robustness and scalability in multi-instrument scenarios.

📝 Abstract
Automatic Music Transcription (AMT) converts audio recordings into symbolic musical representations. Training deep neural networks (DNNs) for AMT typically requires strongly aligned training pairs with precise frame-level annotations. Since creating such datasets is costly and impractical for many musical contexts, weakly aligned approaches using segment-level annotations have gained traction. However, existing methods often rely on Dynamic Time Warping (DTW) or soft alignment loss functions, both of which still require local semantic correspondences, making them error-prone and computationally expensive. In this article, we introduce CountEM, a novel AMT framework that eliminates the need for explicit local alignment by leveraging note event histograms as supervision, enabling lighter computations and greater flexibility. Using an Expectation-Maximization (EM) approach, CountEM iteratively refines predictions based solely on note occurrence counts, significantly reducing annotation efforts while maintaining high transcription accuracy. Experiments on piano, guitar, and multi-instrument datasets demonstrate that CountEM matches or surpasses existing weakly supervised methods, improving AMT's robustness, scalability, and efficiency. Our project page is available at https://yoni-yaffe.github.io/count-the-notes.
Problem

Research questions and friction points this paper is trying to address.

Reduces reliance on costly frame-level annotations for music transcription
Eliminates explicit local alignment requirements in weakly supervised AMT
Improves computational efficiency while maintaining transcription accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses note event histograms for supervision
Employs Expectation-Maximization to refine predictions
Eliminates need for explicit local alignment
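
The core idea—using segment-level note counts instead of frame-aligned labels—can be illustrated with a toy E-step sketch. This is not the paper's implementation; all shapes, values, and the top-k assignment rule are illustrative assumptions. Given a model's frame-wise note probabilities and a segment's note-event histogram, one plausible E-step marks, for each pitch, the most confident frames active so the pseudo frame-level targets agree with the counts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical values): frame-wise note probabilities
# from some model for one segment, shape (num_frames, num_pitches).
num_frames, num_pitches = 8, 3
probs = rng.random((num_frames, num_pitches))

# Segment-level supervision: the note-event histogram, i.e. how many
# frames each pitch should be active in this segment (assumed counts).
histogram = np.array([3, 0, 5])

def e_step(probs, histogram):
    """E-step sketch: for each pitch, mark its k most confident frames
    as active, where k is that pitch's histogram count. The result is a
    pseudo frame-level target consistent with the segment counts."""
    targets = np.zeros_like(probs)
    for pitch, k in enumerate(histogram):
        if k > 0:
            top_frames = np.argsort(probs[:, pitch])[-k:]
            targets[top_frames, pitch] = 1.0
    return targets

targets = e_step(probs, histogram)

# Each pitch's pseudo-target count matches the histogram exactly.
print(targets.sum(axis=0))  # -> [3. 0. 5.]
```

In a full EM loop, an M-step would then train the network toward these pseudo-targets and the two steps would alternate until convergence; the sketch shows only why histogram counts alone can pin down frame-level targets without any DTW or soft alignment.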
Jonathan Yaffe
Tel Aviv University, Israel
Ben Maman
International Audio Laboratories Erlangen
Meinard Müller
International Audio Laboratories Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU)
Music Information Retrieval · Audio Signal Processing · Sound and Music Processing · Computational
Amit H. Bermano
Tel Aviv University, Israel