🤖 AI Summary
This work proposes SemanticAudio, a framework for text-to-audio generation that targets the weak fine-grained semantic alignment of existing methods, which operate directly in acoustic latent spaces. SemanticAudio introduces a decoupled two-stage pipeline: a Semantic Planner first generates compact, high-level semantic features from text, and an Acoustic Synthesizer then renders them into high-fidelity audio. The framework combines variational autoencoders, flow matching, and explicit semantic-acoustic disentanglement. It also incorporates a training-free, text-guided editing mechanism that enables precise attribute-level modifications of general audio. Experiments show that SemanticAudio surpasses existing mainstream approaches in semantic alignment while supporting flexible generation and editing.
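The decoupled two-stage idea can be sketched as two flow-matching ODEs solved in sequence: the first integrates a semantic velocity field from noise toward a text-conditioned semantic plan, and the second integrates an acoustic field conditioned on that plan. The snippet below is a minimal toy illustration, not the paper's models: `semantic_velocity`, `acoustic_velocity`, and `generate` are hypothetical stand-ins, and the "fields" are simple attractors rather than learned networks.

```python
# Toy sketch of SemanticAudio's decoupled two-stage pipeline.
# All functions are illustrative stand-ins, not the paper's actual models.
import random

def semantic_velocity(x, t, text_emb):
    # Hypothetical learned field v(x, t | text): pulls the semantic state
    # toward the text embedding, more strongly as t -> 1.
    return [(c - xi) / max(1.0 - t, 1e-3) * 0.5 for xi, c in zip(x, text_emb)]

def acoustic_velocity(z, t, semantic_plan):
    # Hypothetical second-stage field conditioned on the semantic plan.
    return [(s - zi) for zi, s in zip(z, semantic_plan)]

def euler_flow(velocity, x0, cond, steps=50):
    # Generic Euler ODE solver shared by both stages.
    x, dt = list(x0), 1.0 / steps
    for k in range(steps):
        v = velocity(x, k * dt, cond)
        x = [xi + dt * vi for xi, vi in zip(x, v)]
    return x

def generate(text_emb, dim=4, seed=0):
    rng = random.Random(seed)
    # Stage 1: the Semantic Planner sketches the global semantic layout.
    noise = [rng.gauss(0, 1) for _ in range(dim)]
    plan = euler_flow(semantic_velocity, noise, text_emb)
    # Stage 2: the Acoustic Synthesizer renders acoustic latents
    # conditioned on the semantic plan (a decoder would follow).
    noise2 = [rng.gauss(0, 1) for _ in range(dim)]
    latents = euler_flow(acoustic_velocity, noise2, plan)
    return plan, latents
```

Because the plan lives in a compact semantic space, the first stage only has to get the "what and when" of sound events right, leaving acoustic detail to the second stage.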
📝 Abstract
In recent years, Text-to-Audio Generation has achieved remarkable progress, offering sound creators powerful tools to transform textual inspirations into vivid audio. However, existing models predominantly operate directly in the acoustic latent space of a Variational Autoencoder (VAE), often leading to suboptimal alignment between generated audio and textual descriptions. In this paper, we introduce SemanticAudio, a novel framework that conducts both audio generation and editing directly in a high-level semantic space. We define this semantic space as a compact representation capturing the global identity and temporal sequence of sound events, distinct from fine-grained acoustic details. SemanticAudio employs a two-stage Flow Matching architecture: the Semantic Planner first generates these compact semantic features to sketch the global semantic layout, and the Acoustic Synthesizer subsequently produces high-fidelity acoustic latents conditioned on this semantic plan. Leveraging this decoupled design, we further introduce a training-free text-guided editing mechanism that enables precise attribute-level modifications on general audio without retraining. Specifically, this is achieved by steering the semantic generation trajectory via the difference of velocity fields derived from source and target text prompts. Extensive experiments demonstrate that SemanticAudio surpasses existing mainstream approaches in semantic alignment. Demo available at: https://semanticaudio1.github.io/
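The training-free editing mechanism described above — steering the semantic trajectory by the difference of velocity fields from source and target prompts — can be sketched in a guidance-style form. This is a hypothetical toy illustration, not the paper's exact formulation: `velocity` stands in for the learned conditional field, and the `guidance` weight is an assumed knob.

```python
# Toy sketch of training-free, text-guided editing by steering a
# flow-matching trajectory with the velocity-field difference
# delta_v = v(x, t | target) - v(x, t | source). Illustrative only.
def velocity(x, t, text_emb):
    # Stand-in for a learned field v(x, t | text): pulls x toward text_emb.
    return [(c - xi) for xi, c in zip(x, text_emb)]

def edit_trajectory(x0, src_emb, tgt_emb, guidance=1.0, steps=100):
    x, dt = list(x0), 1.0 / steps
    for k in range(steps):
        t = k * dt
        v_src = velocity(x, t, src_emb)
        v_tgt = velocity(x, t, tgt_emb)
        # Base generation follows the source field; the difference term
        # steers only the attribute that distinguishes target from source.
        v = [vs + guidance * (vt - vs) for vs, vt in zip(v_src, v_tgt)]
        x = [xi + dt * vi for xi, vi in zip(x, v)]
    return x
```

With `guidance=0` the trajectory reproduces the source-prompt generation; increasing it shifts the semantic state toward the target prompt without any retraining, which is what makes attribute-level edits cheap in this design.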