Separate This, and All of these Things Around It: Music Source Separation via Hyperellipsoidal Queries

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional music source separation is constrained by predefined source categories, limiting flexibility in isolating arbitrary sound sources. To address this, the paper proposes an open-vocabulary, region-driven source separation method. Its core contribution is the use of learnable, interpretable hyperellipsoidal regions as source queries: the user specifies a target region via its location and spread in the query space, bypassing fixed-class constraints, and the system extracts whatever sources fall within that region. The system is trained and evaluated on MoisesDB, where experiments demonstrate state-of-the-art performance on both signal-to-noise-ratio and retrieval metrics, advancing fine-grained, query-driven sound component retrieval.

📝 Abstract
Music source separation is an audio-to-audio retrieval task of extracting one or more constituent components, or composites thereof, from a musical audio mixture. Each of these constituent components is often referred to as a "stem" in the literature. Historically, music source separation has been dominated by a stem-based paradigm, leading to most state-of-the-art systems being either a collection of single-stem extraction models, or a tightly coupled system with a fixed, difficult-to-modify set of supported stems. Combined with limited data availability, advances in music source separation have thus been mostly limited to the "VDBO" set of stems: *vocals*, *drums*, *bass*, and the catch-all *others*. Recent work in music source separation has begun to challenge the fixed-stem paradigm, moving towards models able to extract any musical sound, as long as the target type of sound can be specified to the model as an additional query input. We generalize this idea to a *query-by-region* source separation system, specifying the target based on the query regardless of how many sound sources or which sound classes are contained within it. To do so, we propose the use of hyperellipsoidal regions as queries, allowing an intuitive yet easily parametrizable way to specify both the target (location) and its spread. Evaluation on the MoisesDB dataset demonstrated state-of-the-art performance of the proposed system in terms of both signal-to-noise ratios and retrieval metrics.
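The query-by-region idea can be made concrete with a small sketch. Assuming each candidate source in the mixture has an embedding in some query space (the paper's actual embedding space and region parameterization are not detailed here), a hyperellipsoidal query is just a center (location) plus per-axis radii (spread); a source is retrieved when its embedding falls inside the region. The function names and the axis-aligned simplification below are illustrative, not the paper's implementation:

```python
import numpy as np

def hyperellipsoid_distance(points, center, radii):
    """Normalized distance of embedding points to a hyperellipsoid center.

    points: (N, D) array of per-source embeddings (hypothetical).
    center: (D,) query location in the embedding space.
    radii:  (D,) per-axis spread of the query region.
    Values <= 1.0 mean the point lies inside the region.
    """
    scaled = (points - center) / radii
    return np.sum(scaled ** 2, axis=-1)

def select_sources(points, center, radii):
    # A source is part of the query target iff its embedding
    # lies inside the hyperellipsoidal region.
    return hyperellipsoid_distance(points, center, radii) <= 1.0

# Toy example: two sources in a 2-D embedding space; a unit-radius
# query centered at the origin captures only the first one.
sources = np.array([[0.2, 0.1],   # inside the region
                    [2.0, 0.0]])  # outside the region
mask = select_sources(sources, center=np.zeros(2), radii=np.ones(2))
```

Here the separated output would be the mixture of all sources whose mask entry is `True`; widening `radii` grows the region and captures more sources, which matches the abstract's framing of the query as a location plus a spread.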
Problem

Research questions and friction points this paper is trying to address.

Music Separation
Specific Instrument Isolation
Complex Music
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hyperellipsoidal Querying
Music Source Separation
User-specified Region