🤖 AI Summary
This work proposes Mix2Morph, a novel approach for high-quality, controllable sound morphing in the absence of dedicated morphing datasets. Built on text-to-audio diffusion models, Mix2Morph fine-tunes the model at high diffusion timesteps using noise-mixed audio as a surrogate supervisory signal, explicitly guiding it to infuse the timbre and texture of a secondary sound into a dominant primary source. To the authors' knowledge, this is the first method to achieve high-fidelity sound morphing without specialized morphing data. Experiments show that Mix2Morph significantly outperforms existing baselines on both objective metrics and subjective listening tests, generating perceptually natural and structurally coherent morphed audio across diverse sound categories.
📝 Abstract
We introduce Mix2Morph, a text-to-audio diffusion model fine-tuned to perform sound morphing without a dedicated dataset of morphs. By fine-tuning on noisy surrogate mixes at higher diffusion timesteps, Mix2Morph yields stable, perceptually coherent morphs that convincingly integrate qualities of both sources. We specifically target sound infusions, a practically and perceptually motivated subclass of morphing in which one sound acts as the dominant primary source, providing the overall temporal and structural behavior, while a secondary sound is infused throughout, enriching its timbral and textural qualities. Objective evaluations and listening tests show that Mix2Morph outperforms prior baselines and produces high-quality sound infusions across diverse categories, representing a step toward more controllable and concept-driven tools for sound design. Sound examples are available at https://anniejchu.github.io/mix2morph.
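
As a rough illustration of the training recipe, the sketch below shows what fine-tuning on noisy surrogate mixes at high diffusion timesteps could look like in a diffusers-style latent-diffusion setup with ε-prediction. Everything here is an assumption for illustration: the weighted mixing rule and the `mix_weight` and `t_min_frac` knobs are hypothetical, not the paper's actual formulation or API.

```python
import torch
import torch.nn.functional as F

def mix2morph_finetune_step(unet, scheduler, z_dom, z_sec, text_emb,
                            mix_weight=0.7, t_min_frac=0.6):
    """One hypothetical fine-tuning step on a noisy surrogate mix.

    z_dom, z_sec : latents of the dominant and secondary source sounds
    mix_weight   : assumed knob for how strongly the dominant source
                   shapes the surrogate target
    t_min_frac   : restricts training to the high-timestep (noisy) regime,
                   where the paper applies its surrogate supervision
    """
    # Surrogate target: a simple weighted mix of the two source latents.
    z_mix = mix_weight * z_dom + (1.0 - mix_weight) * z_sec

    # Sample only high diffusion timesteps, where coarse structure is decided.
    T = scheduler.config.num_train_timesteps
    t = torch.randint(int(t_min_frac * T), T, (z_mix.shape[0],),
                      device=z_mix.device)

    # Standard epsilon-prediction diffusion loss on the noised mix.
    noise = torch.randn_like(z_mix)
    z_t = scheduler.add_noise(z_mix, noise, t)
    eps_pred = unet(z_t, t, encoder_hidden_states=text_emb).sample
    return F.mse_loss(eps_pred, noise)
```

Restricting the loss to high timesteps matters here: at those noise levels the mix is a plausible stand-in for a true morph, so the model learns how the two sources should blend globally without ever being supervised on (and thus reproducing) the artifacts of a raw audio mixture at low noise levels.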