🤖 AI Summary
Automatic music transcription (AMT) relies heavily on costly, scarce frame-level annotations, and existing weakly supervised approaches still require local alignment (e.g., dynamic time warping, DTW), which propagates alignment errors and adds computational overhead. This paper proposes a weakly supervised learning framework that eliminates local alignment entirely. It introduces note-event histograms (segment-level note frequency statistics) as the sole supervisory signal and integrates them with an expectation-maximization (EM) algorithm and deep neural networks for end-to-end optimization. Crucially, the method dispenses with DTW and soft-alignment mechanisms, requiring only segment-level annotations during training, thereby substantially reducing annotation cost and computational complexity. Evaluated on piano, guitar, and multi-instrument datasets, the approach matches or surpasses state-of-the-art weakly supervised methods, with notably improved robustness and scalability in multi-instrument scenarios.
📝 Abstract
Automatic Music Transcription (AMT) converts audio recordings into symbolic musical representations. Training deep neural networks (DNNs) for AMT typically requires strongly aligned training pairs with precise frame-level annotations. Since creating such datasets is costly and impractical for many musical contexts, weakly aligned approaches using segment-level annotations have gained traction. However, existing methods often rely on Dynamic Time Warping (DTW) or soft alignment loss functions, both of which still require local semantic correspondences, making them error-prone and computationally expensive. In this article, we introduce CountEM, a novel AMT framework that eliminates the need for explicit local alignment by leveraging note event histograms as supervision, enabling lighter computations and greater flexibility. Using an Expectation-Maximization (EM) approach, CountEM iteratively refines predictions based solely on note occurrence counts, significantly reducing annotation efforts while maintaining high transcription accuracy. Experiments on piano, guitar, and multi-instrument datasets demonstrate that CountEM matches or surpasses existing weakly supervised methods, improving AMT's robustness, scalability, and efficiency. Our project page is available at https://yoni-yaffe.github.io/count-the-notes.
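To make the idea of count-based supervision concrete, here is a minimal NumPy sketch of training on note-event histograms instead of frame-level labels: the model's frame-wise onset probabilities are summed over a segment to give an expected note count per pitch, which is compared against the annotated histogram. The function names and the squared-error loss form are illustrative assumptions for exposition only, not CountEM's actual EM formulation.

```python
import numpy as np

def expected_note_counts(onset_probs):
    # onset_probs: (frames, pitches) array of predicted onset probabilities.
    # Summing over time gives the expected number of note events per pitch
    # in the segment, which is all the histogram supervision needs.
    return onset_probs.sum(axis=0)

def histogram_count_loss(onset_probs, target_histogram):
    # Illustrative segment-level loss (an assumption, not the paper's):
    # mean squared error between expected counts and the annotated
    # note-event histogram. No frame-level alignment is ever computed.
    expected = expected_note_counts(onset_probs)
    return float(((expected - target_histogram) ** 2).mean())

# Toy segment: 8 frames x 3 pitches of predicted onset probabilities.
rng = np.random.default_rng(0)
probs = rng.uniform(0.0, 0.3, size=(8, 3))
target = np.array([2.0, 0.0, 1.0])  # annotated note counts per pitch
loss = histogram_count_loss(probs, target)
```

Because the supervision is a per-segment count vector rather than a time-aligned piano roll, the gradient of such a loss only pushes the total mass of predictions toward the annotated counts, which is what lets the approach skip DTW and soft-alignment machinery.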