🤖 AI Summary
Existing graph neural networks (GNNs) neglect conservation laws, such as Kirchhoff's current law, that govern physical flow graphs (e.g., power grids, transportation networks), which distorts the modeled physics and limits representational capacity. To address this, we propose a conservation-aware flow attention mechanism that enforces inflow-outflow balance constraints during attention weight computation, yielding physically consistent message passing. This mechanism enables GNNs to distinguish non-isomorphic flow graphs that standard GNNs cannot, substantially enhancing structural expressivity. Experiments on circuit and power grid benchmarks demonstrate significant gains over state-of-the-art GNNs in both graph-level classification and regression tasks. These results support embedding domain-specific physical conservation principles directly into GNN architectures for modeling real-world flow graphs.
📝 Abstract
Graph Neural Networks (GNNs) have become essential for learning from graph-structured data. However, existing GNNs do not consider the conservation law inherent in graphs associated with a flow of physical resources, such as electrical current in power grids or traffic in transportation networks, which can lead to reduced model performance. To address this, we propose flow attention, which adapts existing graph attention mechanisms to satisfy Kirchhoff's first law. Furthermore, we discuss how this modification influences the expressivity and identify sets of non-isomorphic graphs that can be discriminated by flow attention but not by standard attention. Through extensive experiments on two flow graph datasets (electronic circuits and power grids), we demonstrate that flow attention enhances the performance of attention-based GNNs on both graph-level classification and regression tasks.
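One way to read the core idea is that attention weights can be made to respect Kirchhoff's first law by construction: if each node's outgoing attention weights form a probability distribution, then the node's total outflow is exactly split across its outgoing edges, so inflow equals outflow at every interior node. The sketch below illustrates this with a per-source-node softmax; the function name, the per-source normalization, and the NumPy formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def flow_attention(scores, edge_src, node_outflow, n_nodes):
    """Illustrative sketch (not the paper's method): normalize raw
    attention scores with a softmax over each node's outgoing edges,
    then split that node's outflow across those edges. Because the
    weights per source node sum to 1, the flow assigned to edges
    leaving a node always equals its outflow, so the inflow-outflow
    balance of Kirchhoff's first law holds by construction.

    scores:       raw attention score per edge, shape (E,)
    edge_src:     source node index per edge, shape (E,)
    node_outflow: total outflow per node, shape (n_nodes,)
    returns:      flow carried by each edge, shape (E,)
    """
    alpha = np.zeros_like(scores, dtype=float)
    for v in range(n_nodes):
        mask = edge_src == v
        if mask.any():
            # numerically stable softmax over v's outgoing edges
            e = np.exp(scores[mask] - scores[mask].max())
            alpha[mask] = e / e.sum()
    # each node's outflow is distributed according to its attention weights
    return alpha * node_outflow[edge_src]
```

For example, a node with outflow 2.0 and two outgoing edges will see those two edge flows sum to exactly 2.0, regardless of the raw scores. A mechanism built this way also changes expressivity, since edge weights now depend jointly on the attention scores and the conserved quantities.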