🤖 AI Summary
This work addresses the need for fine-grained instrument editing in text-to-music diffusion models. We propose a zero-shot audio editing method that requires no fine-tuning or retraining. Our approach leverages the temporal semantic evolution inherent in a pre-trained diffusion model to identify an optimal intermediate denoising step for intervention; a lightweight instrument classifier then guides targeted timbre replacement while leaving structural elements such as melody, rhythm, and harmony intact. To our knowledge, this is the first method to establish an "intermediate-step selection" editing paradigm, enabling controllable intervention during forward sampling. Experiments demonstrate significant improvements in both audio fidelity and editing controllability over baseline methods. Crucially, the approach adds no computational overhead to the original model's inference pipeline, preserving its native speed and enabling real-time, interactive music composition.
📝 Abstract
Breakthroughs in text-to-music generation models are transforming the creative landscape, equipping musicians with unprecedented tools for composition and experimentation. However, controlling the generation process to achieve a specific desired outcome remains a significant challenge: even a minor change in the text prompt, under the same random seed, can drastically alter the generated piece. In this paper, we explore the application of existing text-to-music diffusion models to instrument editing. Specifically, given an existing audio track, we aim to leverage a pre-trained text-to-music diffusion model to edit the instrument while preserving the underlying content. Based on the insight that the model first establishes the overall structure or content of the audio, then adds instrument information, and finally refines quality, we show that a well-chosen intermediate timestep, identified through an instrument classifier, balances preserving the original piece's content with achieving the desired timbre. Our method requires no additional training of the text-to-music diffusion model and does not slow the generation process.
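The selection procedure described above can be sketched in code. The toy functions below (`add_noise`, `denoise`, `classifier_score`) are hypothetical stand-ins for the paper's components — the real method uses a pretrained text-to-music diffusion model and a trained instrument classifier — but the control flow mirrors the idea: partially noise the source audio to a candidate timestep, denoise it under the target-instrument prompt, and pick the least-noisy timestep whose result the classifier accepts as the target instrument, so the original content is disturbed as little as possible.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x, t, T=50):
    """Forward-diffuse x to timestep t under a simple variance-preserving schedule."""
    alpha = 1.0 - t / T
    return np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * rng.standard_normal(x.shape)

def denoise(x_t, t, prompt_vec, T=50):
    """Toy 'denoiser': blends x_t toward the prompt embedding as noise increases.
    A real model would run the remaining reverse-diffusion steps conditioned on
    the target-instrument text prompt."""
    w = np.sqrt(1.0 - t / T)
    return w * x_t + (1.0 - w) * prompt_vec

def classifier_score(x, target_vec):
    """Cosine similarity as a stand-in for instrument-classifier confidence."""
    return float(x @ target_vec / (np.linalg.norm(x) * np.linalg.norm(target_vec) + 1e-9))

def select_timestep(x0, target_vec, T=50, threshold=0.5):
    """Scan intermediate timesteps from least to most noisy; return the first
    whose edited result the classifier accepts as the target instrument."""
    for t in range(1, T + 1):
        edited = denoise(add_noise(x0, t, T), t, target_vec, T)
        if classifier_score(edited, target_vec) >= threshold:
            return t, edited
    # Fallback: full noise, i.e. regenerate from scratch (content is lost).
    return T, denoise(add_noise(x0, T, T), T, target_vec, T)

# Usage: a random 'source track' and a random target-instrument embedding.
x0 = rng.standard_normal(64)
target = rng.standard_normal(64)
t_star, edited = select_timestep(x0, target)
```

Because the loop stops at the first accepted timestep, no extra denoising passes are spent beyond the scan itself, consistent with the claim that the editing step adds no overhead to the model's normal sampling loop once the timestep is fixed.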