AudioX: Diffusion Transformer for Anything-to-Audio Generation

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing audio and music generation methods suffer from modality fragmentation, a scarcity of high-quality multimodal paired data, and difficulty fusing diverse input modalities. This paper proposes a unified Diffusion Transformer framework enabling end-to-end generation of high-fidelity general-purpose audio and music from arbitrary input modalities, including text, images, video, audio, and music. Key contributions are: (1) a multimodal masked training strategy that enforces robust cross-modal representation learning; (2) two large-scale, high-quality paired datasets, VGGSound-Caps (190K samples) and V2M-Caps (6M samples); and (3) an integrated architecture incorporating modality adapters, cross-modal conditional injection, and contrastive distillation. Experiments show that the model matches or surpasses state-of-the-art specialized models across multiple audio and music generation benchmarks, with notable gains in generalization, controllability, and cross-modal compatibility.

📝 Abstract
Audio and music generation have emerged as crucial tasks in many applications, yet existing approaches face significant limitations: they operate in isolation without unified capabilities across modalities, suffer from scarce high-quality multi-modal training data, and struggle to effectively integrate diverse inputs. In this work, we propose AudioX, a unified Diffusion Transformer model for Anything-to-Audio and Music Generation. Unlike previous domain-specific models, AudioX can generate both general audio and music with high quality, while offering flexible natural language control and seamless processing of various modalities including text, video, image, music, and audio. Its key innovation is a multi-modal masked training strategy that masks inputs across modalities and forces the model to learn from masked inputs, yielding robust and unified cross-modal representations. To address data scarcity, we curate two comprehensive datasets: VGGSound-Caps, with 190K audio captions based on the VGGSound dataset, and V2M-Caps, with 6 million music captions derived from the V2M dataset. Extensive experiments demonstrate that AudioX not only matches or outperforms state-of-the-art specialized models, but also offers remarkable versatility in handling diverse input modalities and generation tasks within a unified architecture. The code and datasets will be available at https://zeyuet.github.io/AudioX/.
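The abstract describes a multi-modal masked training strategy: whole input modalities are randomly masked so the model must generate audio from whatever conditions remain. The paper's actual implementation is not shown on this page; the following is a minimal illustrative sketch (the function name, the `mask_prob` parameter, and the use of plain lists as stand-ins for embeddings are all assumptions, not AudioX's code):

```python
import random

def mask_modalities(inputs, mask_prob=0.5, rng=None):
    """Randomly drop whole modality inputs, keeping at least one.

    `inputs` maps a modality name (e.g. "text", "video") to its feature
    payload. Masked modalities become None, forcing the model to rely on
    the surviving conditions -- the intuition behind masked multi-modal
    training as described in the abstract.
    """
    rng = rng or random.Random()
    masked = {m: (None if rng.random() < mask_prob else feat)
              for m, feat in inputs.items()}
    # Guarantee at least one modality survives so conditioning is non-empty.
    if all(v is None for v in masked.values()):
        keep = rng.choice(list(inputs))
        masked[keep] = inputs[keep]
    return masked
```

In a real training loop this masking would be applied per batch before the conditioning features are injected into the diffusion transformer, so the same network learns every subset of input modalities.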
Problem

Research questions and friction points this paper is trying to address.

Unified model for anything-to-audio and music generation
Addresses limitations in multi-modal training data scarcity
Enhances cross-modal integration with masked training strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Diffusion Transformer for audio generation
Multi-modal masked training for robust representations
Comprehensive datasets to address data scarcity
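The summary mentions modality adapters feeding a cross-modal conditional injection stage. Assuming (since this page gives no details) that each adapter is a learned projection mapping a modality's native feature dimension into a shared conditioning space, the idea can be sketched as follows; all names, dimensions, and the random linear maps are hypothetical stand-ins for trained layers:

```python
import random

def make_adapter(in_dim, out_dim, rng):
    # Random linear projection standing in for a learned modality adapter.
    w = [[rng.gauss(0, 0.02) for _ in range(in_dim)] for _ in range(out_dim)]
    def adapter(feat):
        return [sum(wi * x for wi, x in zip(row, feat)) for row in w]
    return adapter

rng = random.Random(0)
# Heterogeneous modality features with different native dimensions.
features = {"text": [0.1] * 8, "video": [0.2] * 16, "audio": [0.3] * 4}
adapters = {m: make_adapter(len(f), 6, rng) for m, f in features.items()}

# Project every modality into the shared 6-dim space, then collect the
# results into one conditioning sequence for injection into the model.
tokens = [adapters[m](f) for m, f in features.items()]
```

Once all modalities share one dimension, they can be concatenated and attended to uniformly, which is what makes a single architecture able to accept any subset of text, image, video, and audio inputs.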