Tacit Coordination of Large Language Models

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of how large language models (LLMs) effectively identify and select Schelling focal points to achieve efficient coordination in multi-equilibrium tacit coordination games. Within the framework of focal point theory, the work presents the first large-scale evaluation of various open-source LLMs—including Llama, Qwen, and GPT-oss—on both cooperative and competitive coordination tasks. Integrating psychological experimental paradigms with game-theoretic methods, the authors propose training-free, zero-shot prompting and reasoning strategies. Results show that LLMs outperform human participants in most tasks, yet exhibit notable limitations in scenarios requiring numerical common sense or sensitivity to cultural nuances. The proposed strategies significantly enhance the models’ coordination performance, demonstrating both effectiveness and generalizability across diverse settings.

📝 Abstract
In tacit coordination games with multiple outcomes, purely rational solution concepts, such as Nash equilibria, provide no guidance for which equilibrium to choose. Schelling's theory explains how, in these settings, humans coordinate by relying on focal points: solutions or outcomes that naturally arise because they stand out in some way as salient or prominent to all players. This work studies Large Language Models (LLMs) as players in tacit coordination games, and addresses how, when, and why focal points emerge. We compare and quantify the coordination capabilities of LLMs in cooperative and competitive games for which human experiments are available. We also introduce several learning-free strategies to improve the coordination of LLMs, with themselves and with humans. On a selection of heterogeneous open-source models, including Llama, Qwen, and GPT-oss, we discover that LLMs have a remarkable capability to coordinate and often outperform humans, yet fail on common-sense coordination that involves numbers or nuanced cultural archetypes. This paper constitutes the first large-scale assessment of LLMs' tacit coordination within the theoretical and psychological framework of focal points.
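The abstract's setup can be made concrete with a minimal sketch (illustrative only, not the paper's code): a two-option pure coordination game in which every matching choice is a Nash equilibrium, so rational analysis alone cannot select one, and a hypothetical salience weighting stands in for a Schelling-style focal-point rule. The option names and salience values below are assumptions for illustration.

```python
# A pure tacit coordination game: two players each pick an option and
# both earn 1 on a match, 0 otherwise. Every matching profile is a Nash
# equilibrium, so rationality gives no guidance on which one to play.

OPTIONS = ["heads", "tails"]

def payoff(a, b):
    """Both players score 1 if they match, 0 otherwise."""
    return (1, 1) if a == b else (0, 0)

def is_nash(a, b):
    """(a, b) is a Nash equilibrium if neither player gains
    by unilaterally deviating to another option."""
    ua, ub = payoff(a, b)
    no_gain_a = all(payoff(x, b)[0] <= ua for x in OPTIONS)
    no_gain_b = all(payoff(a, y)[1] <= ub for y in OPTIONS)
    return no_gain_a and no_gain_b

# Every matching profile is an equilibrium; mismatches are not.
equilibria = [(a, b) for a in OPTIONS for b in OPTIONS if is_nash(a, b)]
print(equilibria)  # [('heads', 'heads'), ('tails', 'tails')]

# A focal-point rule breaks the tie via salience. The weights here are
# hypothetical, echoing Schelling's observation that "heads" is the
# culturally prominent answer in the matching-pennies question.
SALIENCE = {"heads": 0.86, "tails": 0.14}  # assumed, for illustration
focal = max(OPTIONS, key=SALIENCE.get)
print(focal)  # heads
```

The `SALIENCE` table is exactly the kind of shared prior the paper probes in LLMs: coordination succeeds only when both players rank the options' prominence the same way.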
Problem

Research questions and friction points this paper is trying to address.

tacit coordination
focal points
large language models
Nash equilibria
human-AI coordination
Innovation

Methods, ideas, or system contributions that make the work stand out.

tacit coordination
focal points
large language models
learning-free strategies
human-AI coordination