Multi-agent cooperation through in-context co-player inference

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how self-interested agents can achieve effective cooperation in multi-agent reinforcement learning. To this end, it proposes a decentralized approach based on sequence models that leverages their in-context learning capabilities to enable rapid within-episode adaptation and the spontaneous emergence of cooperative strategies—without requiring explicit communication, hard-coded assumptions, or separation of timescales. By training agents across a diverse distribution of co-players, the method uniquely employs sequence models to implicitly capture co-player learning awareness, naturally giving rise to reciprocity-based cooperation. The resulting behavior aligns with the theoretical prediction of “vulnerability to extortion driving mutual shaping,” thereby validating the efficacy of this mechanism in fostering stable cooperation.

📝 Abstract
Achieving cooperation among self-interested agents remains a fundamental challenge in multi-agent reinforcement learning. Recent work showed that mutual cooperation can be induced between "learning-aware" agents that account for and shape the learning dynamics of their co-players. However, existing approaches typically rely on hardcoded, often inconsistent, assumptions about co-player learning rules or enforce a strict separation between "naive learners" updating on fast timescales and "meta-learners" observing these updates. Here, we demonstrate that the in-context learning capabilities of sequence models allow for co-player learning awareness without requiring hardcoded assumptions or explicit timescale separation. We show that training sequence model agents against a diverse distribution of co-players naturally induces in-context best-response strategies, effectively functioning as learning algorithms on the fast intra-episode timescale. We find that the cooperative mechanism identified in prior work (where vulnerability to extortion drives mutual shaping) emerges naturally in this setting: in-context adaptation renders agents vulnerable to extortion, and the resulting mutual pressure to shape the opponent's in-context learning dynamics resolves into the learning of cooperative behavior. Our results suggest that standard decentralized reinforcement learning on sequence models combined with co-player diversity provides a scalable path to learning cooperative behaviors.
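The training setup the abstract describes (one agent trained against a resampled, diverse pool of co-players, conditioning its actions on within-episode history) can be illustrated with a minimal sketch. This is not the paper's implementation: it uses an iterated prisoner's dilemma, replaces the sequence model with a tabular policy keyed on the co-player's last move (a stand-in for in-context conditioning), and uses a crude bandit-style update instead of full RL; the co-player pool and all function names are hypothetical.

```python
import random

# Iterated Prisoner's Dilemma payoffs for (my_move, their_move); C=0, D=1.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

# A small, diverse pool of fixed co-players (hypothetical stand-ins for the
# paper's co-player distribution), each mapping the agent's last move to a reply.
def always_cooperate(agent_last): return 0
def always_defect(agent_last): return 1
def tit_for_tat(agent_last): return 0 if agent_last is None else agent_last
CO_PLAYERS = [always_cooperate, always_defect, tit_for_tat]

def play_episode(policy, co_player, steps=20, eps=0.1, lr=0.1):
    """One episode: the agent conditions on the co-player's previous move
    (a minimal 'in-context' signal) and nudges a per-context cooperation
    probability toward whichever action it just took, scaled by reward."""
    my_last, opp_last, total = None, None, 0
    for _ in range(steps):
        ctx = opp_last                       # context: None, 0 (C), or 1 (D)
        p_coop = policy.setdefault(ctx, 0.5)
        my_move = 0 if random.random() < p_coop else 1
        if random.random() < eps:            # exploration
            my_move = random.randrange(2)
        opp_move = co_player(my_last)
        r = PAYOFF[(my_move, opp_move)]
        target = 1.0 if my_move == 0 else 0.0
        policy[ctx] += lr * (r / 5.0) * (target - policy[ctx])
        total += r
        my_last, opp_last = my_move, opp_move
    return total

random.seed(0)
policy = {}
for episode in range(500):
    co_player = random.choice(CO_PLAYERS)    # co-player diversity: resample each episode
    play_episode(policy, co_player)
print(policy)  # per-context cooperation probabilities after training
```

The key design point mirrored here is that the co-player is resampled every episode, so a single fixed strategy cannot be a best response across episodes; the agent is pushed to read the within-episode history instead, which is the mechanism the abstract attributes to in-context learning.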
Problem

Research questions and friction points this paper is trying to address.

multi-agent cooperation
reinforcement learning
self-interested agents
co-player learning
in-context learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

in-context learning
multi-agent cooperation
sequence models
learning-aware agents
co-player diversity