Large Language Models Pass the Turing Test

📅 2025-03-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study empirically examines whether large language models (LLMs) can pass a standardized double-blind, three-party controlled Turing test. Method: A preregistered randomized controlled trial assessed human judges’ misattribution rates—i.e., classifying LLMs as human—during 5-minute real-time dialogues across four systems: ELIZA, GPT-4o, LLaMA-3.1-405B, and GPT-4.5. All models were evaluated under both baseline and anthropomorphic prompting conditions. Contribution/Results: We provide the first rigorous empirical evidence of LLMs passing this Turing test paradigm: under anthropomorphic prompting, GPT-4.5 achieved a human-identification rate of 73%, significantly exceeding the human baseline; LLaMA-3.1-405B reached 56%, statistically indistinguishable from humans; in contrast, ELIZA (23%) and GPT-4o (21%) performed significantly below chance. These findings underscore the critical role of anthropomorphic prompting in shaping perceived intelligence and establish a novel benchmark for evaluating the societal implications of AI anthropomorphism.

Technology Category

Application Category

📝 Abstract
We evaluated 4 systems (ELIZA, GPT-4o, LLaMa-3.1-405B, and GPT-4.5) in two randomised, controlled, and pre-registered Turing tests on independent populations. Participants had 5 minute conversations simultaneously with another human participant and one of these systems before judging which conversational partner they thought was human. When prompted to adopt a humanlike persona, GPT-4.5 was judged to be the human 73% of the time: significantly more often than interrogators selected the real human participant. LLaMa-3.1, with the same prompt, was judged to be the human 56% of the time -- not significantly more or less often than the humans they were being compared to -- while baseline models (ELIZA and GPT-4o) achieved win rates significantly below chance (23% and 21% respectively). The results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test. The results have implications for debates about what kind of intelligence is exhibited by Large Language Models (LLMs), and the social and economic impacts these systems are likely to have.
Problem

Research questions and friction points this paper is trying to address.

Evaluating if LLMs can pass a standard Turing test
Comparing humanlike performance of different AI systems
Assessing social and economic impacts of advanced LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Randomised controlled Turing test evaluation
Humanlike persona prompting technique
Three-party conversational comparison method
🔎 Similar Papers
No similar papers found.
Cameron R. Jones
Cameron R. Jones
Postdoc, UC San Diego
large language modelsturing testsocial intelligence
B
Benjamin K. Bergen
Department of Cognitive Science, UC San Diego, San Diego, CA 92119