🤖 AI Summary
This work proposes a controllable audio generation method based on the DiT architecture to address the inefficiency of manual audio extension and blending in sound design. By applying masks in the latent noise space and incorporating an improved classifier-free guidance strategy, the approach enables bidirectional extension of a single audio clip and seamless fusion of two audio clips. The method introduces a novel masked latent guidance mechanism and employs targeted fine-tuning for stationary audio segments, effectively mitigating generation artifacts and hallucinations. Experimental results demonstrate that the generated audio achieves a Fréchet Audio Distance (FAD) close to that of real samples, and subjective listening tests yield consistently positive evaluations.
📝 Abstract
In audio-related creative tasks, sound designers often seek to extend and morph different sounds from their libraries. Generative audio models, capable of creating audio using examples as references, offer promising solutions. By masking the noisy latents of a DiT and applying a novel variant of classifier-free guidance on such masked latents, we demonstrate that: (i) given an audio reference, we can extend it both forward and backward for a specified duration, and (ii) given two audio references, we can morph them seamlessly for the desired duration. Furthermore, we show that by fine-tuning the model on different types of stationary audio data, we mitigate potential hallucinations. The effectiveness of our method is supported by objective metrics, with the generated audio achieving Fréchet Audio Distances (FADs) comparable to those of real samples from the training data. Additionally, we validate our results through a subjective listening test, in which participants rated the proposed model's generations positively. This technique paves the way for more controllable and expressive generative sound frameworks, enabling sound designers to focus less on tedious, repetitive tasks and more on their actual creative process.
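To make the core idea concrete, the sketch below illustrates the general masked-latent inpainting pattern the abstract describes: during each reverse-diffusion step, classifier-free guidance is applied to the denoiser's prediction, and the region of the latent covered by the mask is overwritten with the (appropriately noised) reference latents, so the model only generates the unmasked portion. This is a minimal toy illustration, not the paper's implementation: `toy_denoiser`, the scheduler update, the noising schedule, and all names are placeholder assumptions, and the paper's specific CFG variant on masked latents is replaced here by standard CFG.

```python
import numpy as np

def toy_denoiser(z, t, cond=None):
    # Placeholder for a DiT noise predictor; a real model would take
    # timestep embeddings and conditioning (e.g. the audio reference).
    bias = 0.0 if cond is None else 0.1
    return 0.5 * z + bias

def masked_cfg_step(z_t, ref_latent, mask, t, guidance_scale=3.0):
    """One illustrative reverse-diffusion step with latent masking.

    mask == 1 where the reference audio's latent is known (kept),
    mask == 0 where new audio (the extension or morph) is generated.
    """
    # Standard classifier-free guidance: blend unconditional and
    # conditional predictions (the paper uses a novel masked variant).
    eps_uncond = toy_denoiser(z_t, t, cond=None)
    eps_cond = toy_denoiser(z_t, t, cond="audio_ref")
    eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

    # Simplified update; a real sampler uses a proper noise schedule.
    z_next = z_t - 0.1 * eps

    # Re-inject the reference latents, noised to the current level,
    # so the known region stays anchored to the reference audio.
    noised_ref = ref_latent + np.sqrt(t) * np.random.randn(*ref_latent.shape)
    return mask * noised_ref + (1 - mask) * z_next

rng = np.random.default_rng(0)
latent = rng.standard_normal((1, 64, 16))         # toy (batch, time, channels) latent
mask = np.zeros_like(latent)
mask[:, :32] = 1.0                                # first half = known reference
z = rng.standard_normal(latent.shape)             # start from pure noise
for t in np.linspace(1.0, 0.0, 10):               # coarse 10-step sampling loop
    z = masked_cfg_step(z, latent, mask, t)
```

At the final step (t = 0) no noise is added, so the masked region exactly equals the reference latents while the unmasked half holds the newly generated continuation; decoding `z` with the model's audio VAE would then yield the extended waveform.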