Personalized Multi-Agent Average Reward TD-Learning via Joint Linear Approximation

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of signal conflict arising from environmental heterogeneity in multi-agent reinforcement learning, which impedes the convergence and efficiency of average-reward temporal difference (TD) algorithms. To mitigate this issue, the paper introduces personalized federated learning into the average-reward setting for the first time and proposes a single-timescale collaborative TD algorithm. The method models each agent’s optimal value function weights as vectors lying in an unknown shared linear subspace and jointly estimates this common representation along with agent-specific head parameters. Theoretical analysis establishes the algorithm’s convergence and demonstrates linear speedup with respect to the number of agents. Empirical results confirm that the proposed approach effectively alleviates heterogeneity-induced interference and significantly improves performance on multi-agent control tasks.

📝 Abstract
We study personalized multi-agent average reward TD learning, in which a collection of agents interacts with different environments and jointly learns their respective value functions. We focus on the setting where there exists a shared linear representation, and the agents' optimal weights collectively lie in an unknown linear subspace. Inspired by the recent success of personalized federated learning (PFL), we study the convergence of cooperative single-timescale TD learning in which agents iteratively estimate the common subspace and local heads. We show that this decomposition filters out conflicting signals, effectively mitigating the negative impact of "misaligned" signals and achieving linear speedup. The main technical challenges lie in the heterogeneity, the Markovian sampling, and their intricate interplay in shaping the error evolution. Specifically, not only are the error dynamics of multiple variables closely interconnected, but there is also no direct contraction for the principal angle distance between the optimal subspace and the estimated subspace. We hope our analytical techniques will inspire deeper exploration of leveraging common structures. Experiments demonstrate the benefits of learning via a shared structure, extending to the more general control problem.
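The algorithm described above can be illustrated with a minimal sketch. The environments, feature map, step size, and update order below are all hypothetical stand-ins, not the paper's exact method: each agent i approximates its value function as V_i(s) ≈ φ(s)ᵀ B wᵢ, where B is the shared subspace estimate and wᵢ is the agent-specific head, and all quantities are updated on a single timescale from Markovian samples, with the subspace updates averaged across agents and re-orthonormalized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: N agents, each with its own small Markov reward
# process over S states (heterogeneous environments), a common feature map
# phi(s) in R^d, a shared subspace B in R^{d x k}, and personalized heads
# w_i in R^k, so that V_i(s) ~ phi(s)^T B w_i.
N, S, d, k = 4, 6, 5, 2
phi = rng.normal(size=(S, d))
phi /= np.linalg.norm(phi, axis=1, keepdims=True)            # normalized features
P = [rng.dirichlet(np.ones(S), size=S) for _ in range(N)]    # per-agent transitions
R = [rng.normal(size=S) for _ in range(N)]                   # per-agent rewards

B = np.linalg.qr(rng.normal(size=(d, k)))[0]  # shared representation (orthonormal cols)
W = np.zeros((N, k))                          # agent-specific heads
eta = np.zeros(N)                             # per-agent average-reward estimates
s = np.zeros(N, dtype=int)                    # current state of each agent's chain
alpha = 0.03                                  # single shared step size (one timescale)

for t in range(2000):
    B_grads = np.zeros((N, d, k))
    for i in range(N):
        s_next = rng.choice(S, p=P[i][s[i]])  # Markovian (non-i.i.d.) sample
        r = R[i][s[i]]
        # Average-reward TD error: delta = r - eta_i + V_i(s') - V_i(s)
        delta = r - eta[i] + (phi[s_next] - phi[s[i]]) @ B @ W[i]
        eta[i] += alpha * (r - eta[i])                 # track the average reward
        W[i] += alpha * delta * (B.T @ phi[s[i]])      # local head update
        B_grads[i] = alpha * delta * np.outer(phi[s[i]], W[i])
        s[i] = s_next
    # Collaborative step: average the subspace updates across agents,
    # then re-orthonormalize the shared representation via QR.
    B = np.linalg.qr(B + B_grads.mean(axis=0))[0]

print(np.round(eta, 3))  # learned average-reward estimates, one per agent
```

Averaging only the subspace update while keeping the heads local is what lets agents in heterogeneous environments cooperate on the common structure without forcing their value functions to agree.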
Problem

Research questions and friction points this paper is trying to address.

personalized multi-agent reinforcement learning
average reward TD-learning
heterogeneity
Markovian sampling
shared linear representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

personalized federated learning
multi-agent TD learning
shared linear representation
subspace estimation
Markovian sampling