🤖 AI Summary
This work proposes a knowledge-driven, self-supervised approach to audio segmentation and source separation that avoids reliance on large-scale manually annotated data. By integrating external prior knowledge, such as musical scores, into the audio processing pipeline, the method uses hidden Markov models to segment and separate music and film audio without labeled training data. Evaluated on simulated data, the approach achieves strong music segmentation and separation results, and in real-world film soundtrack tests, incorporating sound-class priors yields better separation than purely data-driven methods that lack such information. This represents a notable step toward annotation-free audio analysis through principled integration of domain knowledge.
📝 Abstract
We propose a knowledge-driven, model-based approach to segmenting audio into single-category and mixed-category chunks, with applications to source separation. "Knowledge" here denotes information associated with the data, such as music scores. "Model" here refers to a tool that can be used for audio segmentation and recognition, such as a hidden Markov model. In contrast to conventional learning, which often relies on annotated data with given segment categories and their corresponding boundaries to guide the learning process, the proposed framework does not depend on any pre-segmented training data and learns directly from the input audio and its related knowledge sources to build all necessary models autonomously. Evaluation on simulation data shows that score-guided learning achieves very good music segmentation and separation results. Tests on movie track data for cinematic audio source separation also show that utilizing sound-category knowledge achieves better separation results than data-driven techniques that do not use such information.
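The abstract does not include an implementation, but the segmentation step it describes maps naturally onto HMM decoding. Below is a minimal, self-contained sketch of that idea: Viterbi decoding of per-frame class log-likelihoods into single-category and mixed-category chunks. The state labels, transition values, and synthetic frame scores are hypothetical illustrations, not the paper's actual models or parameters.

```python
# Minimal sketch of HMM-style audio segmentation (not the paper's code).
# States represent sound categories; frame log-likelihoods are assumed to
# come from per-class models (e.g., score-informed music models).
import numpy as np

def viterbi_segment(log_lik, log_trans, log_init):
    """Decode the most likely state sequence over T frames and S states.

    log_lik   : (T, S) frame log-likelihoods under each class model
    log_trans : (S, S) log transition probabilities between classes
    log_init  : (S,)   log initial-state probabilities
    Returns the per-frame state path as an int array of length T.
    """
    T, S = log_lik.shape
    delta = np.empty((T, S))             # best path score ending in each state
    back = np.empty((T, S), dtype=int)   # backpointers for path recovery
    delta[0] = log_init + log_lik[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans  # (prev, cur) scores
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_lik[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1][path[t + 1]]
    return path

def path_to_chunks(path, labels):
    """Collapse a frame-level path into (start_frame, end_frame, label) chunks."""
    chunks, start = [], 0
    for t in range(1, len(path) + 1):
        if t == len(path) or path[t] != path[start]:
            chunks.append((start, t, labels[path[start]]))
            start = t
    return chunks

if __name__ == "__main__":
    labels = ["music", "speech", "music+speech"]  # hypothetical categories
    rng = np.random.default_rng(0)
    # Stand-in for real per-class model scores on 200 audio frames.
    log_lik = np.log(rng.dirichlet(np.ones(3), size=200))
    # Sticky self-transitions discourage rapid class switching between frames.
    trans = np.full((3, 3), 0.015) + np.eye(3) * 0.955
    path = viterbi_segment(log_lik, np.log(trans), np.log(np.ones(3) / 3))
    print(path_to_chunks(path, labels)[:5])
```

In a knowledge-driven setting like the one described, the per-class likelihood models would be built from the related knowledge sources (e.g., a score-synchronized music model) rather than trained on pre-segmented data; the sticky transition matrix is one common way to encode the prior that sound categories persist across neighboring frames.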