Provable Cooperative Multi-Agent Exploration for Reward-Free MDPs

📅 2026-02-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies collaborative exploration by multiple agents in a reward-free setting, where the goal is to learn the dynamics of an unknown finite-horizon Markov decision process (MDP). The authors propose a phased collaborative exploration framework in which agents execute designated policies in parallel during each phase to collect trajectories. The analysis reveals a sharp phase transition governed by the horizon length \(H\): an information-theoretic lower bound shows that any algorithm restricted to fewer than \(H\) phases requires a number of agents exponential in the horizon, whereas with exactly \(H\) phases a computationally efficient algorithm needs only \(\tilde{O}(S^6 H^6 A / \varepsilon^2)\) agents to learn an \(\varepsilon\)-accurate dynamics model, which in turn yields \(\varepsilon\)-optimal policies for any reward function.
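To make the framework concrete, here is a minimal Python sketch of a phased, reward-free exploration loop on a toy tabular MDP. The problem sizes, the uniform-random policy assignment, and the pooled empirical estimator are all illustrative placeholders, not the paper's algorithm, which designs the per-phase policy assignments far more carefully.

```python
# Minimal sketch of phased, reward-free exploration in a tabular
# finite-horizon MDP. Illustrative only: the policy-assignment rule
# (uniform-random actions) is a placeholder, not the paper's algorithm.
import numpy as np

S, A, H = 4, 3, 5          # states, actions, horizon (toy sizes)
N_AGENTS = 200             # agents executing policies in parallel per phase

rng = np.random.default_rng(0)
# Hidden ground-truth dynamics: P_true[h, s, a] is a distribution over next states.
P_true = rng.dirichlet(np.ones(S), size=(H, S, A))

def rollout(policy):
    """Execute one policy for a full episode; return the trajectory."""
    traj, s = [], 0                      # all episodes start in state 0
    for h in range(H):
        a = policy[h, s]
        s_next = rng.choice(S, p=P_true[h, s, a])
        traj.append((h, s, a, s_next))
        s = s_next
    return traj

# counts[h, s, a, s'] accumulates transition observations across phases.
counts = np.zeros((H, S, A, S))

for phase in range(H):                   # H learning phases, as in the paper
    # Assign each agent a policy for this phase. Placeholder rule:
    # independent uniform-random action choices per (step, state).
    for _ in range(N_AGENTS):
        policy = rng.integers(A, size=(H, S))
        for h, s, a, s_next in rollout(policy):
            counts[h, s, a, s_next] += 1

# Empirical dynamics estimate from the pooled trajectories
# (uniform fallback for unvisited state-action pairs).
visits = counts.sum(axis=-1, keepdims=True)
P_hat = np.where(visits > 0, counts / np.maximum(visits, 1), 1.0 / S)

err = np.abs(P_hat - P_true).sum(axis=-1).max()   # worst-case L1 error
print(f"max L1 estimation error over (h, s, a): {err:.3f}")
```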

📝 Abstract
We study cooperative multi-agent reinforcement learning in the setting of reward-free exploration, where multiple agents jointly explore an unknown MDP in order to learn its dynamics (without observing rewards). We focus on a tabular finite-horizon MDP and adopt a phased learning framework: in each learning phase, every agent is independently assigned a policy, executes it in the environment, and observes the resulting trajectory. Our primary goal is to characterize the tradeoff between the number of learning phases and the number of agents, especially when the number of learning phases is small. Our results identify a sharp transition governed by the horizon $H$. When the number of learning phases equals $H$, we present a computationally efficient algorithm that uses only $\tilde{O}(S^6 H^6 A / \epsilon^2)$ agents to obtain an $\epsilon$-approximation of the dynamics (i.e., one that yields an $\epsilon$-optimal policy for any reward function). We complement our algorithm with a lower bound showing that any algorithm restricted to $\rho < H$ phases requires at least $A^{H/\rho}$ agents to achieve constant accuracy. Thus, an order of $H$ learning phases is essential if the number of agents is to remain polynomial.
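As a back-of-the-envelope illustration of the phase/agent tradeoff (toy values $A = 10$ and $H = 12$, not taken from the paper), the $A^{H/\rho}$ lower bound grows rapidly as the number of phases $\rho$ shrinks below $H$:

```python
# Illustrative arithmetic for the phase/agent tradeoff: with rho < H phases
# the lower bound forces at least A**(H/rho) agents; at rho = H the paper's
# algorithm needs only polynomially many. Toy values, not from the paper.
A, H = 10, 12
for rho in (2, 3, 6, 12):
    if rho < H:
        print(f"rho = {rho:2d} phases: >= A^(H/rho) = {A ** (H // rho):>9,} agents")
    else:
        print(f"rho = {rho:2d} phases: polynomial, O~(S^6 H^6 A / eps^2) agents")
```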
Problem

Research questions and friction points this paper is trying to address.

multi-agent reinforcement learning
reward-free exploration
Markov decision process
sample complexity
cooperative exploration
Innovation

Methods, ideas, or system contributions that make the work stand out.

reward-free exploration
multi-agent reinforcement learning
sample complexity
learning phases
MDP dynamics
Idan Barnea
Blavatnik School of Computer Science and AI, Tel Aviv University, Israel
Orin Levy
Blavatnik School of Computer Science and AI, Tel Aviv University, Israel
Yishay Mansour
Tel Aviv University
machine learning
reinforcement learning
algorithmic game theory