Human or Machine? A Preliminary Turing Test for Speech-to-Speech Interaction

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of systematic evaluation of humanness in current speech-to-speech (S2S) systems by proposing the first Turing test framework tailored for S2S. Drawing on nearly 3,000 human judgments, the study establishes a fine-grained humanness taxonomy spanning 18 dimensions and develops an interpretable discriminative model capable of automatically distinguishing human from machine interactions. Experimental results show that none of the evaluated S2S systems passes the proposed Turing test, with the key limitations rooted in paralinguistic cues, emotional expressiveness, and conversational persona. The introduced model achieves high accuracy while remaining transparent in its human-vs-machine classification, offering actionable diagnostic insights to guide future system improvements.

📝 Abstract
The pursuit of human-like conversational agents has long been guided by the Turing test. For modern speech-to-speech (S2S) systems, a critical yet unanswered question is whether they can converse like humans. To tackle this, we conduct the first Turing test for S2S systems, collecting 2,968 human judgments on dialogues between 9 state-of-the-art S2S systems and 28 human participants. Our results deliver a clear finding: none of the evaluated S2S systems passes the test, revealing a significant gap in human-likeness. To diagnose this failure, we develop a fine-grained taxonomy of 18 human-likeness dimensions and crowd-annotate our collected dialogues accordingly. Our analysis shows that the bottleneck is not semantic understanding; it stems instead from paralinguistic features, emotional expressivity, and conversational persona. Furthermore, we find that off-the-shelf AI models perform unreliably as Turing test judges. In response, we propose an interpretable model that leverages the fine-grained human-likeness ratings and delivers accurate, transparent human-vs-machine discrimination, offering a powerful tool for automatic human-likeness evaluation. Our work establishes the first human-likeness evaluation for S2S systems and moves beyond binary outcomes to enable detailed diagnostic insights, paving the way for human-like improvements in conversational AI systems.
Problem

Research questions and friction points this paper is trying to address.

speech-to-speech
Turing test
human-likeness
conversational AI
paralinguistic features
Innovation

Methods, ideas, or system contributions that make the work stand out.

speech-to-speech systems
Turing test
human-likeness evaluation
paralinguistic features
interpretable model
Xiang Li
State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications; Shenzhen Research Institute of Big Data; The Chinese University of Hong Kong, Shenzhen; Shenzhen Loop Area Institute
Jiabao Gao
The Chinese University of Hong Kong, Shenzhen
Sipei Lin
The Chinese University of Hong Kong, Shenzhen
Xuan Zhou
The Chinese University of Hong Kong, Shenzhen
Chi Zhang
PhD student, The Chinese University of Hong Kong, Shenzhen
Bo Cheng
Beijing University of Posts and Telecommunications
Internet of Things, Services Computing
Jiale Han
The Hong Kong University of Science and Technology
Natural Language Processing
Benyou Wang
Assistant Professor, The Chinese University of Hong Kong, Shenzhen
large language models, natural language processing, information retrieval, applied machine learning