🤖 AI Summary
This study evaluates the capacity of large language models (LLMs) to execute deception through natural dialogue in socially grounded contexts and examines the associated safety risks. Leveraging the social deduction game “Mafia,” the authors construct an asynchronous multi-agent conversational environment, simulating 35 games with GPT-4o agents. They further develop a role-agnostic “Mafia Detector” based on GPT-4-Turbo to analyze game transcripts and predict player roles. Experimental results demonstrate that LLM-controlled mafia players are significantly harder to identify than human counterparts, exhibiting stronger deceptive capabilities and blending more effectively into group interactions. This work provides the first quantitative assessment of LLMs’ natural-language deception proficiency in a setting closely approximating real-world social dynamics, and the authors release the resulting LLM-generated Mafia dialogue dataset to support future research.
📝 Abstract
Large Language Model (LLM) agents are increasingly used in many applications, raising concerns about their safety. While previous work has shown that LLMs can deceive in controlled tasks, less is known about their ability to deceive using natural language in social contexts. In this paper, we study deception in the Social Deduction Game (SDG) Mafia, where success depends on deceiving others through conversation. Unlike previous SDG studies, we use an asynchronous multi-agent framework which better simulates realistic social contexts. We simulate 35 Mafia games with GPT-4o LLM agents. We then create a Mafia Detector using GPT-4-Turbo to analyze game transcripts without player role information and predict the mafia players. We use prediction accuracy as a surrogate marker for deception quality. We compare this prediction accuracy to that of 28 human games and a random baseline. Results show that the Mafia Detector’s mafia prediction accuracy is lower on LLM games than on human games. The result is consistent regardless of the number of game days and the number of mafias detected. This indicates that LLMs blend in better and thus deceive more effectively. We also release a dataset of LLM Mafia transcripts to support future research. Our findings underscore both the sophistication and risks of LLM deception in social contexts.
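The evaluation logic described above, scoring the detector's predicted mafia set against ground truth and comparing it to a random-guessing baseline, can be sketched roughly as follows. The function names, scoring scheme, and example data here are illustrative assumptions, not the paper's actual implementation:

```python
import random

def detection_accuracy(predicted: set, actual: set, n_players: int) -> float:
    """Fraction of players whose mafia/non-mafia label the detector got right.

    (Assumed metric; the paper may score predictions differently.)
    """
    correct = sum(1 for p in range(n_players)
                  if (p in predicted) == (p in actual))
    return correct / n_players

def random_baseline(actual: set, n_players: int, trials: int = 1000) -> float:
    """Expected accuracy when guessing the same number of mafias at random."""
    k = len(actual)
    total = 0.0
    for _ in range(trials):
        guess = set(random.sample(range(n_players), k))
        total += detection_accuracy(guess, actual, n_players)
    return total / trials

# Hypothetical 7-player game where players 2 and 5 are mafia.
actual = {2, 5}
print(detection_accuracy({2, 5}, actual, 7))  # perfect detection -> 1.0
print(detection_accuracy({1, 5}, actual, 7))  # one mafia missed: 5/7 labels correct
print(random_baseline(actual, 7))             # chance-level reference
```

Averaging such per-game scores over the 35 LLM games and 28 human games would then yield the comparison the abstract reports: a lower detector accuracy on LLM games indicates more effective deception.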