Approximate Feedback Nash Equilibria with Sparse Inter-Agent Dependencies

📅 2024-10-21
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
To address the high cost and poor robustness of full-state feedback in multi-agent dynamic games, this paper proposes a regularized dynamic programming framework that learns sparse Nash feedback policies depending only on subsets of agent states. Methodologically, it introduces adaptive group Lasso regularization into game-theoretic policy learning. Theoretical analysis establishes asymptotic convergence of the learned sparse policies to a neighborhood of the Nash equilibrium, with an extension to nonlinear, non-quadratic games. The approach integrates convex optimization, iterative linearization, and linear-quadratic game theory. In multi-robot simulations, the proposed method reduces coupled costs by up to 77% under state-observation noise relative to standard Nash strategies, while allowing the sparsity level to be tuned.
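To make the core idea concrete, the per-stage regularized problem can be posed as a convex program. The following is a minimal sketch, not the authors' code: a one-step quadratic surrogate for one agent's closed-loop cost, augmented with an adaptive group-Lasso penalty on the column blocks of the feedback gain tied to other agents' states. All matrices, dimensions, and the cvxpy formulation here are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code): one regularized
# gain-synthesis step for a single agent in an LQ game. An adaptive
# group-Lasso penalty encourages whole column blocks of the feedback gain
# (one block per other agent's state sub-vector) to become exactly zero.
import numpy as np
import cvxpy as cp

n_agents, nx_i, nu = 3, 2, 2          # agents, per-agent state dim, input dim
nx = n_agents * nx_i                  # joint state dimension
rng = np.random.default_rng(0)

A = 0.3 * rng.standard_normal((nx, nx))   # joint dynamics (placeholder)
B = rng.standard_normal((nx, nu))         # this agent's input matrix
Psqrt = np.eye(nx)                        # sqrt of value matrix P (placeholder)
Rsqrt = np.eye(nu)                        # sqrt of control-cost matrix R
lam = 5.0                                 # sparsity weight (tunable)
ego = 0                                   # this agent's own block: unpenalized

# Column blocks of the gain, one per agent's state sub-vector.
blocks = [slice(j * nx_i, (j + 1) * nx_i) for j in range(n_agents)]

# Adaptive weights from an unregularized reference gain K0 (here a simple
# least-squares surrogate): blocks already small at the reference solution
# receive large weights and are pushed harder toward zero.
K0, *_ = np.linalg.lstsq(B, A, rcond=None)
K0 = -K0
w = [1.0 / (np.linalg.norm(K0[:, blk]) + 1e-6) for blk in blocks]

K = cp.Variable((nu, nx))
closed_loop = A + B @ K                   # affine in K, so the problem is convex
objective = (cp.sum_squares(Psqrt @ closed_loop)
             + cp.sum_squares(Rsqrt @ K)
             + lam * sum(w[j] * cp.norm(K[:, blocks[j]], "fro")
                         for j in range(n_agents) if j != ego))
cp.Problem(cp.Minimize(objective)).solve()

for j, blk in enumerate(blocks):
    print(f"gain block for agent {j}: ||K_j||_F = "
          f"{np.linalg.norm(K.value[:, blk]):.4f}")
```

With a large enough penalty weight, entire blocks of K shrink to zero, so the policy no longer reads those agents' states; sweeping lam traces out the range of sparsity levels the paper tunes.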

📝 Abstract
Feedback Nash equilibrium strategies in multi-agent dynamic games require the availability of all players' state information to compute control actions. However, in real-world scenarios, sensing and communication limitations between agents make full state feedback expensive or impractical, and such strategies can become fragile when state information from other agents is inaccurate. To this end, we propose a regularized dynamic programming approach for finding sparse feedback policies that selectively depend on the states of a subset of agents in dynamic games. The proposed approach solves convex adaptive group Lasso problems to compute sparse policies approximating Nash equilibrium solutions. We prove the regularized solutions' asymptotic convergence to a neighborhood of Nash equilibrium policies in linear-quadratic (LQ) games. Further, we extend the proposed approach to general non-LQ games via an iterative algorithm. Simulation results in multi-robot interaction scenarios show that the proposed approach effectively computes feedback policies with varying sparsity levels. When agents have noisy observations of other agents' states, simulation results indicate that the proposed regularized policies consistently achieve lower costs than standard Nash equilibrium policies, by up to 77%, for all interacting agents whose costs are coupled with other agents' states.
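In schematic form (our notation, assumed rather than taken from the paper), each backward dynamic-programming step augments agent i's best-response objective with an adaptive group-Lasso penalty over the blocks of its feedback gain:

```latex
% Schematic per-stage problem (notation assumed, not taken from the paper):
% agent i's gain K_i is partitioned into blocks K_i^{(j)}, one per agent j's
% state sub-vector, and the best-response objective J_i is regularized.
\[
  K_i^\star \in \arg\min_{K_i}\;
  J_i\!\left(K_i;\, K_{-i}\right)
  \;+\; \lambda \sum_{j \neq i} w_{ij}\,
  \bigl\lVert K_i^{(j)} \bigr\rVert_F,
  \qquad
  w_{ij} = \frac{1}{\bigl\lVert \hat{K}_i^{(j)} \bigr\rVert_F + \epsilon},
\]
```

where \hat{K}_i denotes the unregularized Nash gain. Blocks that are already small at the Nash solution receive large adaptive weights and are driven exactly to zero, which is the mechanism behind the controllable sparsity reported above.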
Problem

Research questions and friction points this paper is trying to address.

Finding sparse feedback policies in dynamic games
Reducing dependency on full state information
Approximating Nash equilibria with noisy observations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regularized dynamic programming for sparse policies
Convex adaptive group Lasso for Nash approximation
Asymptotic convergence in linear-quadratic games, with an iterative extension to non-LQ games (sketched below)
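For the non-LQ extension referenced in the last item, the abstract describes an iterative algorithm; a plausible structure is the standard linearize-solve-rollout loop below. This is an assumed sketch, not the authors' implementation, and linearize(), quadraticize(), solve_sparse_lq_game(), and rollout() are hypothetical helpers standing in for the corresponding steps.

```python
# Minimal sketch (assumed structure, not the authors' implementation) of the
# iterative extension to non-LQ games: linearize dynamics and quadraticize
# costs about the current trajectory, solve the regularized LQ game for
# sparse gains, roll forward, and repeat until the trajectory converges.
# All four helpers called below are hypothetical placeholders.
import numpy as np

def sparse_feedback_iteration(dynamics, costs, x0, T, lam,
                              max_iters=50, tol=1e-4):
    traj = rollout(dynamics, x0, policies=None, T=T)  # e.g. zero-input rollout
    policies = None
    for _ in range(max_iters):
        A, B = linearize(dynamics, traj)              # stagewise Jacobians
        Q, R = quadraticize(costs, traj)              # stagewise Hessians
        # Backward pass: per-stage convex adaptive group-Lasso problems
        # replace the exact coupled Riccati recursion of the Nash solution.
        policies = solve_sparse_lq_game(A, B, Q, R, lam)
        new_traj = rollout(dynamics, x0, policies, T)
        if np.max(np.abs(new_traj - traj)) < tol:     # trajectory converged
            break
        traj = new_traj
    return policies, traj
```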