🤖 AI Summary
Computing the Banzhaf value exactly in network flow games incurs exponential O(2^m) complexity, rendering it intractable for large-scale or dynamic multi-agent systems. To address this, we propose the first end-to-end learning framework based on graph neural networks (GAT, GINE, EdgeConv) that approximates the Banzhaf value as a graph-level regression task, directly learning agent influence from network topology and control structure. Our method enables zero-shot generalization across unseen network architectures, overcoming key limitations of conventional Monte Carlo sampling, namely poor transferability and low sample efficiency. Evaluated on large-scale synthetic benchmarks, our model achieves high approximation accuracy, accelerates inference by 3–4 orders of magnitude over exact computation, and generalizes robustly to previously unobserved graph structures.
📝 Abstract
Computing the Banzhaf value in network flow games is fundamental for quantifying agent influence in multi-agent systems, with applications ranging from cybersecurity to infrastructure planning. However, exact computation is intractable for systems with more than $\sim 20$ agents due to exponential complexity $\mathcal{O}(2^m)$. While Monte Carlo sampling methods provide statistical estimates, they suffer from high sample complexity and cannot transfer knowledge across different network configurations, making them impractical for large-scale or dynamic systems. We present a novel learning-based approach using Graph Neural Networks (GNNs) to approximate Banzhaf values in cardinal network flow games. By framing the problem as a graph-level prediction task, our method learns generalisable patterns of agent influence directly from network topology and control structure. We conduct a comprehensive empirical study comparing three state-of-the-art GNN architectures, Graph Attention Networks (GAT), Graph Isomorphism Networks with Edge features (GINE), and EdgeConv, on a large-scale synthetic dataset of 200,000 graphs per configuration, varying in size (20-100 nodes), agent count (5-20), and edge probability (0.5-1.0). Our results demonstrate that trained GNN models achieve high-fidelity Banzhaf value approximation with order-of-magnitude speedups compared to exact and sampling-based methods. Most significantly, we show strong zero-shot generalisation: models trained on graphs of a specific size and topology accurately predict Banzhaf values for entirely new networks with different structural properties, without requiring retraining. This work establishes GNNs as a practical tool for scalable cooperative game-theoretic analysis of complex networked systems.
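The exponential cost being avoided, and the Monte Carlo baseline being compared against, can be made concrete with a small self-contained sketch. This is illustrative only: the function names, the toy four-node network, and the agent-to-edge assignments are our own assumptions, not taken from the paper. Each agent controls a set of edges, a coalition's value is the maximum s–t flow through the edges its members control, and the Banzhaf value averages an agent's marginal contribution over all $2^{n-1}$ coalitions of the other agents, which is what makes exact enumeration blow up.

```python
from collections import deque
from itertools import combinations
import random

def max_flow(edges, source, sink):
    """Edmonds-Karp max flow. `edges` maps (u, v) -> capacity."""
    residual = {}
    for (u, v), c in edges.items():
        residual[(u, v)] = residual.get((u, v), 0) + c
        residual.setdefault((v, u), 0)      # reverse residual edge
    adj = {}
    for (u, v) in residual:
        adj.setdefault(u, set()).add(v)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck

def coalition_value(coalition, control, source, sink):
    """v(S): max s-t flow using only edges controlled by agents in S."""
    edges = {}
    for agent in coalition:
        for e, cap in control[agent]:
            edges[e] = edges.get(e, 0) + cap
    return max_flow(edges, source, sink)

def banzhaf_values(control, source, sink):
    """Exact Banzhaf value per agent: average of v(S + {i}) - v(S) over
    all 2^(n-1) coalitions S of the other agents -- the exponential
    enumeration that motivates a learned approximation."""
    agents = list(control)
    n = len(agents)
    values = {}
    for i in agents:
        others = [a for a in agents if a != i]
        total = 0
        for r in range(n):
            for S in combinations(others, r):
                total += (coalition_value(set(S) | {i}, control, source, sink)
                          - coalition_value(set(S), control, source, sink))
        values[i] = total / 2 ** (n - 1)
    return values

def banzhaf_monte_carlo(control, source, sink, n_samples=500, seed=0):
    """Unbiased sampling estimate: draw uniform random coalitions of the
    other agents instead of enumerating all of them."""
    rng = random.Random(seed)
    agents = list(control)
    est = {i: 0.0 for i in agents}
    for i in agents:
        others = [a for a in agents if a != i]
        for _ in range(n_samples):
            S = {a for a in others if rng.random() < 0.5}
            est[i] += (coalition_value(S | {i}, control, source, sink)
                       - coalition_value(S, control, source, sink)) / n_samples
    return est

# Hypothetical toy network: two disjoint s->t paths with unit capacities.
control = {
    0: [(("s", "a"), 1), (("a", "t"), 1)],  # agent 0 owns the whole top path
    1: [(("s", "b"), 1)],                   # agents 1 and 2 split the bottom path
    2: [(("b", "t"), 1)],
}
print(banzhaf_values(control, "s", "t"))    # {0: 1.0, 1: 0.5, 2: 0.5}
```

The exact loop makes $2 \cdot 2^{n-1}$ max-flow calls per agent, while the sampler trades that for a fixed budget per agent, but, as the abstract notes, each new network still requires sampling from scratch; a trained GNN amortises that cost across networks.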