🤖 AI Summary
Existing spoken language models (SLMs) have not been systematically evaluated on controllable speech generation driven by natural language instructions, particularly with respect to voice timbre, prosody, role-playing, and implicit empathy. Method: The authors introduce Voice Style Adaptation (VSA) as a new task; construct VStyle, a bilingual (Chinese-English) benchmark covering four categories of speech generation; design LALM as a Judge, a multi-dimensional automatic evaluation framework that leverages the audio understanding capabilities of large audio-language models to progressively assess textual faithfulness, style adherence, and naturalness, enabling reproducible, quantitative evaluation; and release a high-quality bilingual instruction-speech dataset alongside open-source evaluation tools. Contribution/Results: Experiments reveal significant performance limitations of leading commercial and open-source SLMs on this task, confirming its difficulty and establishing a new paradigm and foundational resources for controllable speech generation research.
📝 Abstract
Spoken language models (SLMs) have emerged as a unified paradigm for speech understanding and generation, enabling natural human-machine interaction. However, while most progress has focused on semantic accuracy and instruction following, the ability of SLMs to adapt their speaking style based on spoken instructions has received limited attention. We introduce Voice Style Adaptation (VSA), a new task that examines whether SLMs can modify their speaking style, such as timbre, prosody, or persona, in response to natural spoken language commands. To study this task, we present VStyle, a bilingual (Chinese & English) benchmark covering four categories of speech generation: acoustic attributes, natural language instruction, role play, and implicit empathy. We also introduce the Large Audio Language Model as a Judge (LALM as a Judge) framework, which progressively evaluates outputs along textual faithfulness, style adherence, and naturalness, ensuring reproducible and objective assessment. Experiments on commercial systems and open-source SLMs demonstrate that current models face clear limitations in controllable style adaptation, highlighting both the novelty and challenge of this task. By releasing VStyle and its evaluation toolkit, we aim to provide the community with a foundation for advancing human-centered spoken interaction. The dataset and code are publicly available at the [project homepage](https://junzhan2000.github.io/VStyle.github.io/).
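To make the staged evaluation concrete, below is a minimal sketch of such a progressive judging loop. It assumes a hypothetical `judge_fn(audio_path, prompt) -> float` callable backed by a large audio-language model; the prompts, 0-1 score scale, and pass threshold are illustrative assumptions, not the actual interface of the released VStyle toolkit.

```python
# Minimal sketch of a progressive "LALM as a Judge" pipeline. The stage order
# (textual faithfulness -> style adherence -> naturalness) follows the
# abstract; judge_fn, the rubric prompts, and the threshold are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class JudgeOutcome:
    stage: str                 # last stage that was scored
    scores: dict[str, float]   # per-stage scores in [0, 1]

def progressive_judge(
    judge_fn: Callable[[str, str], float],
    audio_path: str,
    instruction: str,
    threshold: float = 0.5,
) -> JudgeOutcome:
    """Score a generated utterance stage by stage, stopping at the first failure."""
    stages = [
        ("textual_faithfulness",
         f"Rate 0-1: does the speech accurately render the content requested by: {instruction}"),
        ("style_adherence",
         f"Rate 0-1: do the timbre, prosody, and persona match the instruction: {instruction}"),
        ("naturalness",
         "Rate 0-1: how natural and human-like does the speech sound?"),
    ]
    scores: dict[str, float] = {}
    last = stages[0][0]
    for name, prompt in stages:
        last = name
        scores[name] = judge_fn(audio_path, prompt)
        if scores[name] < threshold:  # fail early: later stages are not scored
            break
    return JudgeOutcome(stage=last, scores=scores)
```

The early-exit ordering reflects the progressive design described in the abstract: an utterance that fails textual faithfulness is not scored for style or naturalness, so downstream dimensions are only compared across outputs that already say the right words.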