AI Summary
This study investigates how large language model (LLM) agents achieve coordination through implicit numerical signals in the absence of explicit communication. Framed within game theory, the research employs multi-agent simulations, LLM-based reasoning, and controlled experiments across four canonical game scenarios to systematically analyze the influence of communication paradigms, agent personality traits, and repeated interactions on emergent coordination behavior. The work reveals, for the first time, that LLM agents can spontaneously develop implicit numerical coordination mechanisms under specific conditions, and it elucidates the structural prerequisites and strategic consequences of such mechanisms. Experimental results demonstrate that, within certain game structures and under repeated interaction, this implicit coordination significantly enhances both cooperative efficiency and strategic payoffs.
Abstract
LLM-based agents increasingly operate in multi-agent environments that require strategic interaction and coordination. While existing work has largely focused on individual agents or on interacting agents that communicate explicitly, less is known about how interacting agents coordinate implicitly. In particular, agents may engage in covert communication, relying on indirect or non-linguistic signals embedded in their actions rather than on explicit messages. This paper presents a game-theoretic study of covert communication in LLM-driven multi-agent systems. We analyse interactions across four canonical game-theoretic settings under different communication regimes: explicit, restricted, and absent. Considering heterogeneous agent personalities and both one-shot and repeated games, we characterise when covert signals emerge and how they shape coordination and strategic outcomes.
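To make the notion of an implicit numerical signal concrete, the following is a minimal hypothetical sketch (not the paper's actual protocol; the game, function names, and encoding scheme are assumptions for illustration only). One agent hides a small discrete signal in the low-order digits of an otherwise payoff-relevant numeric action, and the other recovers it without any explicit message being exchanged:

```python
# Hypothetical illustration of a covert numerical channel in a repeated game.
# Agent A embeds a signal (e.g. an intended coordination point for the next
# round) in the second decimal place of its numeric action; Agent B decodes it.
# This is an assumed toy scheme, not the mechanism identified in the study.

def encode_action(base_value: float, signal: int) -> float:
    """Embed a small integer signal (0-9) in the second decimal place
    of a numeric action, leaving its first-order payoff meaning intact."""
    return round(base_value, 1) + signal / 100


def decode_signal(action: float) -> int:
    """Recover the hidden signal from an observed numeric action."""
    return round(action * 100) % 10


# Agent A wants to suggest coordination point 7 for the next round
# without sending any explicit message:
a_action = encode_action(42.3, signal=7)   # observed as an ordinary bid of 42.37
recovered = decode_signal(a_action)        # Agent B reads 7 from the covert channel
```

Under repeated interaction, a convention like this can emerge only if both agents converge on the same encoding, which is one way to read the paper's point that covert coordination has structural prerequisites.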