VStyle: A Benchmark for Voice Style Adaptation with Spoken Instructions

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Controllable speech generation driven by natural-language spoken instructions, particularly voice timbre, prosody, role-playing, and implicit empathy, has not been systematically investigated for speech-language models (SLMs). Method: The authors introduce Voice Style Adaptation as a new task; construct VStyle, a bilingual (Chinese-English) benchmark covering four key generation scenarios; design LALM as a Judge, a multi-dimensional automatic evaluation framework that jointly leverages audio understanding and generation capabilities to enable reproducible, quantitative assessment of text fidelity, style consistency, and naturalness; and release a high-quality bilingual instruction-speech dataset alongside open-source evaluation tools. Contribution/Results: Experiments reveal significant performance limitations of leading commercial and open-source SLMs on this task, confirming its difficulty and establishing a new paradigm and foundational resources for controllable speech generation research.

📝 Abstract
Spoken language models (SLMs) have emerged as a unified paradigm for speech understanding and generation, enabling natural human-machine interaction. However, while most progress has focused on semantic accuracy and instruction following, the ability of SLMs to adapt their speaking style based on spoken instructions has received limited attention. We introduce Voice Style Adaptation (VSA), a new task that examines whether SLMs can modify their speaking style, such as timbre, prosody, or persona, following natural-language spoken commands. To study this task, we present VStyle, a bilingual (Chinese & English) benchmark covering four categories of speech generation: acoustic attributes, natural language instruction, role play, and implicit empathy. We also introduce the Large Audio Language Model as a Judge (LALM as a Judge) framework, which progressively evaluates outputs along textual faithfulness, style adherence, and naturalness, ensuring reproducible and objective assessment. Experiments on commercial systems and open-source SLMs demonstrate that current models face clear limitations in controllable style adaptation, highlighting both the novelty and challenge of this task. By releasing VStyle and its evaluation toolkit, we aim to provide the community with a foundation for advancing human-centered spoken interaction. The dataset and code are publicly available at the project's homepage: https://junzhan2000.github.io/VStyle.github.io/
Problem

Research questions and friction points this paper is trying to address.

Assess SLM voice style adaptation via spoken commands
Evaluate style changes in timbre, prosody, and persona
Benchmark bilingual speech generation with objective evaluation framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Voice Style Adaptation task for spoken commands
Bilingual benchmark covering four speech categories
LALM as a Judge framework for evaluation
Jun Zhan
Fudan University
Mingyang Han
Alibaba Group
Yuxuan Xie
Fudan University
Chen Wang
Alibaba Group
Dong Zhang
Fudan University
Kexin Huang
Fudan University
Haoxiang Shi
Waseda University
DongXiao Wang
Alibaba Group
Tengtao Song
Alibaba Group
Qinyuan Cheng
Fudan University
Shimin Li
Fudan University
Jun Song
Shenzhen University
Xipeng Qiu
Fudan University
Bo Zheng
Fudan University