AsymPuzl: An Asymmetric Puzzle for multi-agent cooperation

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates multi-round collaborative communication among large language model (LLM) agents under information asymmetry. To this end, we introduce AsymPuzl—a lightweight, symbolic two-agent puzzle environment requiring joint reasoning via exactly two rounds of message exchange, where agents possess complementary, incomplete observations. We systematically evaluate communication efficacy across leading open- and closed-source LLMs under self-feedback and joint-feedback strategies. Results show that stronger models (e.g., GPT-5, Claude-4.0) consistently achieve information complementarity and correct task resolution; weaker models frequently ignore messages or over-correct hypotheses; and feedback mechanisms exhibit strong nonlinearity—simple self-feedback alone substantially boosts success rates for weaker models. This study provides the first quantitative characterization of inter-LLM communication strategy divergence in a controlled symbolic setting, offering novel empirical evidence and a methodological framework for designing multi-agent coordination mechanisms.

📝 Abstract
Large Language Model (LLM) agents are increasingly studied in multi-turn, multi-agent scenarios, yet most existing setups emphasize open-ended role-play rather than controlled evaluation. We introduce AsymPuzl, a minimal but expressive two-agent puzzle environment designed to isolate communication under information asymmetry. Each agent observes complementary but incomplete views of a symbolic puzzle and must exchange messages to solve it cooperatively. Using a diverse set of current-generation and open-source LLMs, we show that (i) strong models such as GPT-5 and Claude-4.0 reliably converge on the solution across puzzle sizes by sharing complete information in two turns, (ii) weaker models often ignore partner messages or over-correct their hypotheses, and (iii) feedback design is non-trivial: simple self-feedback improves success rates, while detailed joint feedback can hurt performance. These findings show that even in simple cooperative tasks, LLM communication strategies diverge and depend on the granularity of feedback signals. AsymPuzl thus provides a testbed for probing the limits of multi-turn cooperation and opens avenues for studying coordination mechanisms.
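The interaction pattern the abstract describes (two agents with complementary partial views, exchanging messages over two rounds, then each answering) can be sketched as a toy loop. This is a hypothetical illustration, not the paper's actual environment or API: the puzzle representation, function names, and the idealized "share everything you know" messaging policy are all assumptions. With LLM agents, each message would instead be a generated natural-language turn.

```python
# Hypothetical sketch of an AsymPuzl-style episode. All names are
# illustrative; the paper's real environment is not reproduced here.

def run_episode(puzzle, view_a, view_b, rounds=2):
    """Two agents with complementary partial views exchange messages
    for a fixed number of rounds; success requires BOTH agents to end
    up with the complete puzzle (joint task resolution)."""
    memo_a, memo_b = dict(view_a), dict(view_b)
    for _ in range(rounds):
        # Idealized policy: each agent's message is its current
        # knowledge. Weaker models might instead drop parts of the
        # partner's message or overwrite correct entries.
        msg_a, msg_b = dict(memo_a), dict(memo_b)
        memo_a.update(msg_b)   # A integrates B's message
        memo_b.update(msg_a)   # B integrates A's message
    return memo_a == puzzle and memo_b == puzzle

# Toy symbolic puzzle: position -> symbol, split between the agents.
puzzle = {0: "R", 1: "G", 2: "B", 3: "Y"}
view_a = {0: "R", 1: "G"}   # agent A sees the first half
view_b = {2: "B", 3: "Y"}   # agent B sees the second half
print(run_episode(puzzle, view_a, view_b))  # prints True
```

Under this idealized sharing policy one round already suffices; the two-round budget in the paper gives agents a chance to confirm or repair after integrating the partner's first message.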
Problem

Research questions and friction points this paper is trying to address.

Evaluates multi-agent LLM communication under information asymmetry
Tests cooperative puzzle-solving with complementary incomplete views
Probes feedback impact on LLM coordination in simple tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-agent puzzle environment isolates communication asymmetry
Agents exchange messages with complementary incomplete puzzle views
Feedback design impacts success rates in cooperative tasks
Xavier Cadet
Dartmouth College, Hanover, NH 03755
Edward Koh
Dartmouth College
Artificial Intelligence · Machine Learning
Peter Chin
Dartmouth College, Hanover, NH 03755