🤖 AI Summary
Existing text-to-speech (TTS) methods fail to effectively leverage large language models’ (LLMs) instruction-following capabilities, limiting controllability and cross-lingual generalization in TTS. This paper proposes BatonVoice—a decoupled, instruction-driven TTS framework inspired by operationalism: an LLM acts as the “conductor,” parsing user instructions into a textual control plan encoding acoustic attributes (e.g., pitch, energy); a dedicated TTS model, BatonTTS, serves as the “orchestra,” faithfully synthesizing speech from this plan. To our knowledge, this is the first work to introduce the operationalist paradigm to TTS, explicitly separating instruction interpretation from acoustic generation. BatonVoice enables zero-shot cross-lingual control and significantly outperforms state-of-the-art open-source and proprietary baselines on controllable and expressive TTS tasks, demonstrating exceptional generalization to unseen languages.
📝 Abstract
The rise of Large Language Models (LLMs) is reshaping multimodal models, with speech synthesis being a prominent application. However, existing approaches often underutilize the linguistic intelligence of these models, typically failing to leverage their powerful instruction-following capabilities. This limitation hinders a model's ability to follow text instructions for controllable Text-to-Speech (TTS). To address this, we propose a new paradigm inspired by "operationalism" that decouples instruction understanding from speech generation. We introduce BatonVoice, a framework where an LLM acts as a "conductor", understanding user instructions and generating a textual "plan" of explicit vocal features (e.g., pitch, energy). A separate TTS model, the "orchestra", then generates the speech from these features. To realize this component, we develop BatonTTS, a TTS model trained specifically for this task. Our experiments demonstrate that BatonVoice achieves strong performance in controllable and emotional speech synthesis, outperforming strong open- and closed-source baselines. Notably, our approach enables remarkable zero-shot cross-lingual generalization, accurately applying feature-control abilities to languages unseen during post-training. This demonstrates that objectifying speech into textual vocal features can more effectively unlock the linguistic intelligence of LLMs.