Fully Byzantine-Resilient Distributed Multi-Agent Q-Learning

📅 2026-04-03
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of achieving convergence to an optimal value function in distributed reinforcement learning under Byzantine edge attacks, where existing methods fail to ensure reliable multi-agent coordination. The authors propose a novel distributed Q-learning algorithm that incorporates a redundancy-based filtering mechanism leveraging two-hop neighborhood information to detect and discard malicious messages. By establishing new graph-theoretic conditions for convergence and providing a constructive criterion verifiable in polynomial time, the method guarantees almost-sure convergence to the optimal value function in Byzantine environments, a first in the field. Experimental results demonstrate that the proposed algorithm reliably converges to the optimal policy under attack, whereas all baseline approaches diverge or fail.
๐Ÿ“ Abstract
We study Byzantine-resilient distributed multi-agent reinforcement learning (MARL), where agents must collaboratively learn optimal value functions over a compromised communication network. Existing resilient MARL approaches typically guarantee almost-sure convergence only to near-optimal value functions, or require restrictive assumptions to ensure convergence to the optimal solution. As a result, agents may fail to learn the optimal policies under these methods. To address this, we propose a novel distributed Q-learning algorithm under which all agents' value functions converge almost surely to the optimal value functions despite Byzantine edge attacks. The key idea is a redundancy-based filtering mechanism that leverages two-hop neighbor information to validate incoming messages while preserving bidirectional information flow. We then introduce a new topological condition for the convergence of our algorithm, present a systematic method to construct such networks, and prove that this condition can be verified in polynomial time. We validate our results through simulations, showing that our method converges to the optimal solutions, whereas prior methods fail under Byzantine edge attacks.
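To make the filtering idea concrete, here is a minimal sketch of a redundancy-based filter in the spirit the abstract describes: a neighbor's message is accepted only when an independent two-hop relay corroborates it. The paper's actual update rule, matching criterion, and graph conditions are not given here, so the function name, message layout, and tolerance are all illustrative assumptions, not the authors' implementation.

```python
def filter_messages(direct_msgs, relayed_msgs, tol=1e-6):
    """Hypothetical two-hop redundancy filter (illustrative only).

    direct_msgs:  {neighbor_id: value} received on direct edges
    relayed_msgs: {neighbor_id: [copies relayed via two-hop paths]}

    A value is kept only if at least one independently relayed copy
    (approximately) matches it; unmatched values are discarded as
    potentially Byzantine.
    """
    accepted = {}
    for nbr, value in direct_msgs.items():
        copies = relayed_msgs.get(nbr, [])
        if any(abs(value - c) <= tol for c in copies):
            accepted[nbr] = value  # corroborated by a two-hop relay
        # else: drop the message; the direct edge may be compromised
    return accepted
```

The intuition this sketch captures: a Byzantine edge can corrupt only the path it sits on, so an honest value relayed along a disjoint two-hop path still arrives intact, and disagreement between the two copies exposes the attack. The paper's topological condition presumably guarantees enough such disjoint paths exist.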
Problem

Research questions and friction points this paper is trying to address.

Byzantine resilience
distributed multi-agent reinforcement learning
optimal value functions
Byzantine edge attacks
convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Byzantine resilience
distributed Q-learning
multi-agent reinforcement learning
redundancy-based filtering
topological condition