Preference-Guided Learning for Sparse-Reward Multi-Agent Reinforcement Learning

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Online multi-agent reinforcement learning (MARL) under sparse rewards—where agents receive scalar feedback only upon episode termination—suffers from insufficient intermediate supervision for policy learning. Method: We propose the first unified framework integrating inverse preference learning with value decomposition. Our approach (1) constructs an implicit multi-agent reward model grounded in pairwise preferences, generating both global and local advantage signals; (2) designs a preference-guided value decomposition network that decouples centralized critic training from decentralized actor optimization; and (3) leverages large language models to synthesize high-quality preference labels, enhancing reward modeling robustness. Results: The method achieves significant improvements over state-of-the-art baselines on MAMuJoCo and SMACv2 benchmarks, demonstrating strong effectiveness and generalization in realistic online MARL settings with sparse rewards.
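The implicit reward model described above is trained from pairwise trajectory preferences rather than environment rewards. A minimal sketch of the standard Bradley-Terry preference loss used in inverse preference learning is shown below; the function name and scalar-return interface are illustrative, not the paper's actual architecture:

```python
import numpy as np

def preference_loss(ret_a, ret_b, label):
    """Bradley-Terry preference loss for one trajectory pair.

    ret_a, ret_b: predicted returns (sums of the implicit per-step
    rewards) for trajectories A and B.
    label: 1.0 if A is preferred, 0.0 if B is preferred.
    """
    # P(A preferred) under the Bradley-Terry model: sigmoid of the
    # return difference.
    p_a = 1.0 / (1.0 + np.exp(ret_b - ret_a))
    # Binary cross-entropy against the preference label.
    return -(label * np.log(p_a) + (1.0 - label) * np.log(1.0 - p_a))
```

Minimizing this loss pushes the learned return of preferred trajectories above that of dispreferred ones, yielding dense intermediate reward signals even when the environment only provides terminal feedback.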

📝 Abstract
We study the problem of online multi-agent reinforcement learning (MARL) in environments with sparse rewards, where reward feedback is not provided at each interaction but only revealed at the end of a trajectory. This setting, though realistic, presents a fundamental challenge: the lack of intermediate rewards hinders standard MARL algorithms from effectively guiding policy learning. To address this issue, we propose a novel framework that integrates online inverse preference learning with multi-agent on-policy optimization into a unified architecture. At its core, our approach introduces an implicit multi-agent reward learning model, built upon a preference-based value-decomposition network, which produces both global and local reward signals. These signals are further used to construct dual advantage streams, enabling differentiated learning targets for the centralized critic and decentralized actors. In addition, we demonstrate how large language models (LLMs) can be leveraged to provide preference labels that enhance the quality of the learned reward model. Empirical evaluations on state-of-the-art benchmarks, including MAMuJoCo and SMACv2, show that our method achieves superior performance compared to existing baselines, highlighting its effectiveness in addressing sparse-reward challenges in online MARL.
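The abstract's "dual advantage streams" pair a global signal for the centralized critic with per-agent signals for the decentralized actors. A minimal one-step sketch of that construction, under the assumption that each stream is simply its reward signal minus a value baseline (the paper's exact estimator is not specified here):

```python
def dual_advantages(global_rew, local_rews, v_global, v_locals):
    """Build dual advantage streams from global and local reward signals.

    global_rew: scalar reward from the implicit global reward model.
    local_rews: per-agent rewards from the local reward heads.
    v_global:   centralized critic's value estimate.
    v_locals:   per-agent value baselines.
    Returns (global advantage, list of per-agent advantages).
    """
    a_global = global_rew - v_global           # target for the centralized critic
    a_locals = [r - v for r, v in zip(local_rews, v_locals)]  # actor targets
    return a_global, a_locals
```

Separating the two streams lets the critic and the actors optimize differentiated targets, matching the decoupled centralized-training / decentralized-execution setup the abstract describes.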
Problem

Research questions and friction points this paper is trying to address.

Addressing sparse-reward challenges in multi-agent reinforcement learning
Integrating preference learning with multi-agent policy optimization
Enhancing reward models using large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates online inverse preference learning with MARL
Uses preference-based value-decomposition for reward signals
Leverages LLMs to enhance reward model quality
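The value-decomposition component above combines local signals into a global one. A minimal VDN/QMIX-flavoured sketch of the key monotonicity constraint, with illustrative names and a simple weighted sum standing in for the paper's learned mixing network:

```python
import numpy as np

def mix_local_signals(local_rewards, weights):
    """Monotonically mix per-agent reward signals into a global signal.

    Clipping the weights to be non-negative keeps the mixture monotone
    in each local signal, so improving any single agent's local reward
    never decreases the global estimate.
    """
    w = np.maximum(weights, 0.0)  # enforce monotonicity
    return float(np.dot(w, local_rewards))
```

In practice the mixing weights would be produced by a hypernetwork conditioned on the global state, but the monotonicity property illustrated here is what makes the decomposition consistent between centralized and decentralized views.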