ReLU and Softplus neural nets as zero-sum turn-based games

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural network interpretability, robustness verification, and training lack a unified theoretical foundation grounded in classical decision-theoretic frameworks. Method: This paper establishes a rigorous correspondence between ReLU/Softplus neural networks and zero-sum turn-based stopping games: network outputs are interpreted as game values; forward propagation implements the Shapley-Bellman backward recursion; backpropagation corresponds to policy optimization; the network input encodes the game's terminal reward; and the output admits a discrete Feynman-Kac path-integral representation with respect to strategy measures. Policies serve as explicit robustness certificates, enabling efficient estimation of output bounds from input bounds and formal robustness verification. Training is reformulated as an inverse game problem, and the construction extends to Softplus activations via entropy-regularized games. Contribution/Results: The work unifies game-theoretic, dynamic-programming, and path-integral perspectives, yielding a principled, interpretable framework for neural networks, with theoretical guarantees, formal verification tools, and a novel game-theoretic training paradigm.

📝 Abstract
We show that the output of a ReLU neural network can be interpreted as the value of a zero-sum, turn-based, stopping game, which we call the ReLU net game. The game runs in the direction opposite to that of the network, and the input of the network serves as the terminal reward of the game. In fact, evaluating the network is the same as running the Shapley-Bellman backward recursion for the value of the game. Using the expression of the value of the game as an expected total payoff with respect to the path measure induced by the transition probabilities and a pair of optimal policies, we derive a discrete Feynman-Kac-type path-integral formula for the network output. This game-theoretic representation can be used to derive bounds on the output from bounds on the input, leveraging the monotonicity of Shapley operators, and to verify robustness properties using policies as certificates. Moreover, training the neural network becomes an inverse game problem: given pairs of terminal rewards and corresponding values, one seeks transition probabilities and rewards of a game that reproduces them. Finally, we show that a similar approach applies to neural networks with Softplus activation functions, where the ReLU net game is replaced by its entropic regularization.
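The abstract's central identity is that evaluating the network is the same as running the Shapley-Bellman backward recursion: at each neuron, a maximizing player chooses between stopping (payoff 0) and continuing with the affine payoff, which is exactly ReLU(Wx + b). The following is a minimal illustrative sketch of that reading; the function name and the toy dimensions are ours, and the paper's actual game has a richer state and player structure.

```python
import numpy as np

def relu_layer_as_game_step(W, b, v_next):
    """One backward step of a Shapley-Bellman-style recursion.

    At each state (neuron), the maximizing player chooses between
    stopping (payoff 0) and continuing (affine payoff W @ v_next + b),
    so the step value is max(0, W @ v_next + b) = ReLU(W v_next + b).
    """
    return np.maximum(0.0, W @ v_next + b)

# Evaluating the network = running the recursion, with the network
# input playing the role of the game's terminal reward.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)                              # terminal reward
W1, b1 = rng.standard_normal((3, 4)), rng.standard_normal(3)
W2, b2 = rng.standard_normal((2, 3)), rng.standard_normal(2)
v = relu_layer_as_game_step(W1, b1, x)
v = relu_layer_as_game_step(W2, b2, v)                  # game value = net output
```

The recursion runs in the direction opposite to the usual forward-pass narration, matching the abstract's remark that the game runs opposite to the network.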
Problem

Research questions and friction points this paper is trying to address.

Interpreting ReLU neural network outputs as zero-sum game values
Deriving path-integral formulas for network outputs via game theory
Applying game-theoretic representation to bound outputs and verify robustness
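The third point above, bounding outputs from input bounds, rests on the monotonicity of Shapley operators. A standard interval-arithmetic sketch of that idea is below: splitting W into its positive and negative parts makes the affine map monotone in the bounds. The helper name is ours, and this is only an illustration of the monotone-operator bound, not the paper's certificate construction.

```python
import numpy as np

def relu_interval_bounds(W, b, lo, hi):
    """Propagate componentwise bounds lo <= x <= hi through ReLU(W x + b).

    Splitting W into positive and negative parts applies the affine map
    monotonically, so the resulting interval is sound: every x in the
    input box maps into [new_lo, new_hi].
    """
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = np.maximum(0.0, Wp @ lo + Wn @ hi + b)
    new_hi = np.maximum(0.0, Wp @ hi + Wn @ lo + b)
    return new_lo, new_hi

rng = np.random.default_rng(1)
W, b = rng.standard_normal((3, 2)), rng.standard_normal(3)
lo, hi = np.array([-1.0, 0.0]), np.array([1.0, 2.0])
out_lo, out_hi = relu_interval_bounds(W, b, lo, hi)
```

Iterating this layer by layer gives certified output intervals from input intervals, which is the shape of the robustness-verification use case described in the abstract.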
Innovation

Methods, ideas, or system contributions that make the work stand out.

ReLU networks as zero-sum turn-based stopping games
Game-theoretic representation enables robustness verification via policies
Training becomes inverse game problem for transition probabilities
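The Softplus extension mentioned in the abstract replaces the ReLU net game by its entropic regularization. The connection can be seen at the level of a single activation: softplus is a smoothed max(0, t), and sharpening the regularization recovers ReLU in the limit. A small sketch of this limiting behavior, with a temperature parameter `beta` introduced by us for illustration:

```python
import numpy as np

def softplus(t, beta=1.0):
    """Entropy-regularized max(0, t): (1/beta) * log(exp(0) + exp(beta*t)).

    As beta -> infinity this converges to max(0, t), i.e. ReLU; at
    beta = 1 it is the standard Softplus activation log(1 + e^t).
    """
    return np.log1p(np.exp(beta * t)) / beta

t = np.array([-2.0, 0.0, 3.0])
smooth = softplus(t, beta=1.0)    # standard Softplus
sharp = softplus(t, beta=50.0)    # close to max(0, t) = [0, 0, 3]
```

In game terms, the hard max of the maximizing player is replaced by a log-sum-exp (soft-max) operator, which is the hallmark of entropy-regularized dynamic programming.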