🤖 AI Summary
This paper addresses the efficient computation of Nash equilibria in graph-structured stochastic differential games (SDGs). Existing methods suffer from parameter redundancy, poor interpretability, and weak stability on large-scale sparse graphs. To overcome these limitations, the authors propose a Non-Trainable Modification (NTM) neural architecture that explicitly encodes graph topological priors into fixed, sparse, non-trainable modules, ensuring structural alignment and substantial parameter compression. NTM is instantiated within two solver frameworks: NTM-DP, integrated into the direct parameterization paradigm, and NTM-DBSDE, embedded in the deep backward stochastic differential equation (BSDE) framework. Experiments on three canonical graph SDG benchmarks demonstrate that NTM achieves accuracy and robustness comparable to fully parameterized models while reducing trainable parameters by up to ~70%. Moreover, NTM delivers superior computational efficiency and enhanced interpretability, offering a principled trade-off between expressivity, efficiency, and transparency in graph-structured SDG learning.
📝 Abstract
We propose a novel neural network architecture, called Non-Trainable Modification (NTM), for computing Nash equilibria in stochastic differential games (SDGs) on graphs. These games model a broad class of graph-structured multi-agent systems arising in finance, robotics, energy, and social dynamics, where agents interact locally under uncertainty. The NTM architecture imposes a graph-guided sparsification on feedforward neural networks, embedding fixed, non-trainable components aligned with the underlying graph topology. This design enhances interpretability and stability, while significantly reducing the number of trainable parameters in large-scale, sparse settings. We theoretically establish a universal approximation property for NTM in static games on graphs and numerically validate its expressivity and robustness through supervised learning tasks. Building on this foundation, we incorporate NTM into two state-of-the-art game solvers, Direct Parameterization and Deep BSDE, yielding their sparse variants (NTM-DP and NTM-DBSDE). Numerical experiments on three SDGs across various graph structures demonstrate that NTM-based methods achieve performance comparable to their fully trainable counterparts, while offering improved computational efficiency.
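The abstract's "graph-guided sparsification" can be illustrated with a minimal sketch: a feedforward layer whose weight matrix is elementwise-multiplied by a fixed, non-trainable binary mask derived from the graph adjacency, so that weights between non-adjacent agents are pinned to zero. The paper's exact NTM construction is not reproduced here; the mask rule (self-loops plus edges), the layer shape, and the helper names below are illustrative assumptions.

```python
# Hypothetical sketch of a graph-masked feedforward layer in the spirit of NTM.
# Assumption: the fixed mask keeps self-loops and graph edges; everything else
# is zeroed and never trained. This is NOT the paper's exact architecture.
import numpy as np

def graph_mask(adj: np.ndarray) -> np.ndarray:
    """Fixed, non-trainable binary mask: self-loops plus graph edges."""
    return ((adj + np.eye(adj.shape[0])) > 0).astype(float)

def masked_forward(x: np.ndarray, W: np.ndarray, b: np.ndarray,
                   mask: np.ndarray) -> np.ndarray:
    """Feedforward layer with graph-guided sparsification (ReLU activation).
    Only the entries of W where mask == 1 ever influence the output."""
    return np.maximum((W * mask) @ x + b, 0.0)

# Example: path graph on 3 agents (0 - 1 - 2); agents 0 and 2 are not adjacent.
adj = np.array([[0.0, 1.0, 0.0],
                [1.0, 0.0, 1.0],
                [0.0, 1.0, 0.0]])
mask = graph_mask(adj)
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3))   # dense trainable weights before masking
b = np.zeros(3)
x = np.ones(3)
y = masked_forward(x, W, b, mask)
# The effective weight (W * mask)[0, 2] is forced to zero, so agent 0's
# output has no direct dependence on agent 2's state.
```

In this reading, the parameter savings the abstract reports come from the masked entries: on a sparse graph, most of `W` is structurally zero and need not be stored or trained.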