Wonder Wins Ways: Curiosity-Driven Exploration through Multi-Agent Contextual Calibration

📅 2025-09-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multi-agent reinforcement learning (MARL) under sparse rewards, intrinsic curiosity mechanisms often conflate environmental stochasticity with task-relevant novelty and neglect semantic information in peer agents’ behaviors. To address this, we propose CERMIC—a novel framework featuring: (1) a context-calibration mechanism that dynamically modulates intrinsic rewards based on inferred peer behavior, the first of its kind in MARL; (2) a theoretically grounded intrinsic reward function maximizing information gain; and (3) a denoised surprise signal filtering technique to suppress spurious novelty. Evaluated across standard benchmarks—including VMAS, Melting Pot, and SMACv2—CERMIC consistently outperforms state-of-the-art methods, demonstrating significant improvements in exploration efficiency, robustness to environmental noise, and adaptability to sparse-reward settings.

📝 Abstract
Autonomous exploration in complex multi-agent reinforcement learning (MARL) with sparse rewards critically depends on providing agents with effective intrinsic motivation. While artificial curiosity offers a powerful self-supervised signal, it often confuses environmental stochasticity with meaningful novelty. Moreover, existing curiosity mechanisms exhibit a uniform novelty bias, treating all unexpected observations equally. However, peer behavior novelty, which encodes latent task dynamics, is often overlooked, resulting in suboptimal exploration in decentralized, communication-free MARL settings. To this end, inspired by how human children adaptively calibrate their own exploratory behavior by observing peers, we propose a novel approach to enhance multi-agent exploration. We introduce CERMIC, a principled framework that empowers agents to robustly filter noisy surprise signals and guide exploration by dynamically calibrating their intrinsic curiosity with inferred multi-agent context. Additionally, CERMIC generates theoretically grounded intrinsic rewards, encouraging agents to explore state transitions with high information gain. We evaluate CERMIC on benchmark suites including VMAS, Meltingpot, and SMACv2. Empirical results demonstrate that exploration with CERMIC significantly outperforms SoTA algorithms in sparse-reward environments.
Problem

Research questions and friction points this paper is trying to address.

Addresses sparse reward challenges in multi-agent reinforcement learning exploration
Filters noisy surprise signals from environmental stochasticity versus meaningful novelty
Overcomes uniform novelty bias by incorporating peer behavior context
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic calibration of intrinsic curiosity with multi-agent context
Robust filtering of noisy surprise signals for exploration
Generation of theoretically-grounded intrinsic rewards for information gain
Yiyuan Pan
Carnegie Mellon University
Robot Learning · Multimodal Learning · Reinforcement Learning
Zhe Liu
Shanghai Jiao Tong University
Hesheng Wang
Shanghai Jiao Tong University