AudioChat: Unified Audio Storytelling, Editing, and Understanding with Transfusion Forcing

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing audio foundation models struggle with “audio stories”: complex multi-source acoustic scenes involving multiple speakers and background/foreground sound effects, which introduce challenges in semantic coherence, temporal alignment, and physical consistency. This work proposes AudioChat, a unified framework in which LLM-driven tool-calling agents simulate user–system interactions, and the resulting dialogues serve as structured training data. It also introduces a novel training objective, Audio Transfusion Forcing, that jointly enables generation, editing, and understanding of audio stories, supporting multi-turn interactive audio processing and structured chain-of-thought reasoning and thereby overcoming the limitations of conventional single-task models. Experiments show that AudioChat achieves superior performance on audio story tasks, evaluated with three newly proposed metrics that measure task performance directly rather than relying on generic distribution-based scoring.
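
To make the data-simulation paradigm concrete, here is a minimal Python sketch of how an LLM-based tool-calling agent could roll out user–system dialogues to use as training data. The tool names, dialogue schema, and `call_llm` stub are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of agent-simulated dialogue generation.
# Tool names and the `call_llm` stub are assumptions for illustration;
# a real pipeline would call an actual LLM tool-calling API.
import json
import random
from dataclasses import dataclass

@dataclass
class Turn:
    role: str                     # "user", "assistant", or "tool"
    content: str                  # instruction, reasoning, or tool result
    tool_call: dict | None = None # structured call emitted by the agent

# Tools the simulated system can invoke on an audio story (assumed names).
TOOLS = ["generate_speech", "add_sound_effect", "edit_segment", "describe_audio"]

def call_llm(prompt: str) -> dict:
    """Stand-in for an LLM tool-calling API; picks a random tool here."""
    tool = random.choice(TOOLS)
    return {"tool": tool, "args": {"prompt": prompt[:64]}}

def simulate_dialogue(seed_instruction: str, max_turns: int = 4) -> list[Turn]:
    """Roll out a multi-turn user-system interaction as a training example."""
    turns = [Turn("user", seed_instruction)]
    for _ in range(max_turns):
        decision = call_llm(turns[-1].content)
        turns.append(Turn("assistant", f"Calling {decision['tool']}", tool_call=decision))
        turns.append(Turn("tool", f"<result of {decision['tool']}>"))
    return turns

if __name__ == "__main__":
    dialogue = simulate_dialogue("Make a story: two pirates argue while a storm builds.")
    print(json.dumps([t.__dict__ for t in dialogue], indent=2))
```

The simulated traces pair high-level instructions with tool-level decompositions, which is what lets a single model later learn to both reason about and act on audio stories.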

📝 Abstract
Despite recent breakthroughs, audio foundation models struggle to process complex multi-source acoustic scenes. We refer to this challenging domain as audio stories, which can have multiple speakers and background/foreground sound effects. Compared to traditional audio processing tasks, audio stories introduce new layers of semantic, temporal, and physical complexity. To address this challenge, we propose AudioChat, a framework for developing audio foundation models that can generate, edit, and understand audio stories. AudioChat introduces a new paradigm in which LLM-based tool-calling agents simulate interactions between users and the system, and these simulated dialogues are used as training data. We also introduce a novel Audio Transfusion Forcing objective to train the AudioChat model, allowing it to simultaneously decompose high-level instructions via structured chain-of-thought reasoning and perform interactive multi-turn audio understanding/generation. To evaluate generation and editing performance, we develop three new metrics that directly measure task performance instead of relying upon distribution-based scoring. We highly encourage readers to visit our demo to better understand the capabilities of AudioChat: https://wanchichen.github.io/audiochat/.
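
The page does not spell out the form of Audio Transfusion Forcing, but the name suggests a Transfusion-style objective that interleaves discrete text tokens with continuous audio latents in one sequence. Below is a minimal, hypothetical sketch of such a joint loss: next-token cross-entropy at text positions plus a denoising loss at audio-latent positions. The tensor shapes, the noise-prediction parameterization, and the `lambda_audio` weighting are all assumptions for illustration, not the paper's actual objective.

```python
# Hypothetical sketch of a Transfusion-style joint objective: language-model
# loss on text tokens, diffusion-style denoising loss on audio latents.
# Shapes, parameterization, and `lambda_audio` are illustrative assumptions.
import torch
import torch.nn.functional as F

def transfusion_style_loss(text_logits, text_targets, noise_pred, noise,
                           lambda_audio=1.0):
    """Joint loss over an interleaved text/audio sequence.

    text_logits:  (N_text, vocab)  model predictions at text positions
    text_targets: (N_text,)        ground-truth next tokens
    noise_pred:   (N_audio, d)     predicted noise at audio-latent positions
    noise:        (N_audio, d)     actual noise added to the audio latents
    """
    lm_loss = F.cross_entropy(text_logits, text_targets)  # discrete tokens
    diff_loss = F.mse_loss(noise_pred, noise)             # continuous latents
    return lm_loss + lambda_audio * diff_loss

# Toy usage with random tensors, just to show the shapes involved.
loss = transfusion_style_loss(
    text_logits=torch.randn(16, 32000),
    text_targets=torch.randint(0, 32000, (16,)),
    noise_pred=torch.randn(8, 64),
    noise=torch.randn(8, 64),
)
print(loss.item())
```

A single objective of this shape is what would let one model both emit chain-of-thought text and generate or edit audio within the same multi-turn sequence.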
Problem

Research questions and friction points this paper is trying to address.

audio stories
multi-source acoustic scenes
audio foundation models
complex audio understanding
audio generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

AudioChat
Audio Transfusion Forcing
audio storytelling
tool-calling agents
multi-turn audio understanding