🤖 AI Summary
This work addresses the problem of computing Nash equilibria in offline preference-based multi-agent reinforcement learning (PbMARL), where the only learning signal is sparse preference feedback. Recognizing that the conventional single-policy coverage assumption is inadequate in the multi-agent setting, we establish the first theoretical framework for PbMARL and introduce a more realistic unilateral dataset coverage condition. Methodologically, we propose two algorithmic techniques: mean-squared-error regularization along the time axis and distribution-aware pessimistic penalization. Our theoretical analysis derives upper bounds on the sample complexity of provably efficient PbMARL. Empirical evaluation confirms the necessity of unilateral coverage and demonstrates that our techniques improve reward-model stability and accelerate convergence to Nash equilibria.
📝 Abstract
We initiate the study of Preference-Based Multi-Agent Reinforcement Learning (PbMARL), exploring both its theoretical foundations and empirical validation. We define the task as identifying a Nash equilibrium of a general-sum game from a preference-only offline dataset, a problem marked by the challenge of sparse feedback signals. Our theory establishes upper sample-complexity bounds for learning Nash equilibria in PbMARL, demonstrating that single-policy coverage is inadequate and highlighting the importance of unilateral dataset coverage. These theoretical insights are verified through comprehensive experiments. To enhance practical performance, we further introduce two algorithmic techniques, sketched below. (1) We propose a mean-squared-error (MSE) regularization along the time axis that yields a more uniform reward distribution over timesteps and improves reward learning. (2) We propose an additional penalty based on the distribution of the dataset to incorporate pessimism, improving stability and effectiveness during training. Our findings underscore the multifaceted approach required for PbMARL, paving the way for effective preference-based multi-agent systems.
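To make technique (1) concrete, here is a minimal sketch of how an MSE regularizer along the time axis could be attached to a standard Bradley-Terry preference loss for reward learning. The abstract does not specify the exact formulation, so the Bradley-Terry pairing, the trajectory-mean target, and names such as `reward_model` and `lam` are illustrative assumptions, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def preference_loss_with_temporal_mse(reward_model, traj_a, traj_b, pref, lam=0.1):
    """Bradley-Terry preference loss plus a temporal MSE regularizer (sketch).

    traj_a, traj_b: tensors of shape (batch, horizon, obs_dim), the per-step
        inputs to the reward model for the two trajectories in each pair.
    pref: tensor of shape (batch,), 1.0 if traj_a is preferred, else 0.0.
    lam: regularizer weight (hypothetical value).
    """
    # Per-step rewards, shape (batch, horizon).
    r_a = reward_model(traj_a).squeeze(-1)
    r_b = reward_model(traj_b).squeeze(-1)

    # Bradley-Terry model: P(a > b) = sigmoid(sum_t r_a[t] - sum_t r_b[t]).
    logits = r_a.sum(dim=1) - r_b.sum(dim=1)
    bt_loss = F.binary_cross_entropy_with_logits(logits, pref)

    # Temporal MSE: pull each step's reward toward the trajectory mean,
    # encouraging a more uniform reward distribution along the time axis.
    def temporal_mse(r):
        return ((r - r.mean(dim=1, keepdim=True)) ** 2).mean()

    return bt_loss + lam * (temporal_mse(r_a) + temporal_mse(r_b))
```

The intuition is that the preference signal only constrains trajectory-level return differences, leaving per-step rewards underdetermined; the regularizer resolves this by spreading credit evenly across timesteps.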
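Technique (2) can likewise be sketched as a pessimistic adjustment that subtracts a coverage-dependent bonus from the learned reward. The abstract only says the penalty is based on the distribution of the dataset, so the count-based form below, along with `counts` and `beta`, is one common instantiation assumed for illustration.

```python
import torch

def pessimistic_reward(r_hat, counts, beta=1.0):
    """Distribution-aware pessimism (sketch): penalize rewards on pairs
    that are poorly covered by the offline dataset.

    r_hat: learned rewards, shape (batch,).
    counts: dataset visitation counts for the corresponding
        (state, joint-action) pairs, shape (batch,). How coverage is
        estimated (counts, a density model, ensemble disagreement) is an
        implementation choice, not specified by the abstract.
    beta: pessimism coefficient (hypothetical value).
    """
    # Penalty shrinks as coverage grows; rarely seen pairs are
    # discounted most, keeping the learned policy near the data.
    penalty = beta / torch.sqrt(counts.clamp(min=1.0))
    return r_hat - penalty
```

Any estimate of the dataset distribution could replace the raw counts here; the key design choice is that the penalty grows as the evaluated joint action moves away from the data's coverage.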