🤖 AI Summary
This work addresses the limitation of existing graph neural networks (GNNs) in performing global multi-step relational reasoning due to their reliance on local message passing. To overcome this, the authors propose FloydNet, a novel architecture that integrates dynamic programming principles into graph learning. FloydNet maintains a global all-pairs relational tensor and employs a learnable generalized dynamic programming operator to iteratively refine it, thereby enabling complex global reasoning. The method establishes a task-specific relational calculus framework that transcends the locality bottleneck of conventional GNNs, achieving theoretical expressiveness equivalent to 3-WL (i.e., 2-FWL). Empirically, FloydNet attains over 99% accuracy on the CLRS-30 algorithmic benchmark, solves general TSP instances to optimality at rates significantly exceeding strong heuristics, and fully matches the discriminative power of 3-WL on the BREC benchmark.
📝 Abstract
Developing models capable of complex, multi-step reasoning is a central goal in artificial intelligence. While representing problems as graphs is a powerful approach, Graph Neural Networks (GNNs) are fundamentally constrained by their message-passing mechanism, which imposes a local bottleneck that limits global, holistic reasoning. We argue that dynamic programming (DP), which solves problems by iteratively refining a global state, offers a more powerful and suitable learning paradigm. We introduce FloydNet, a new architecture that embodies this principle. In contrast to local message passing, FloydNet maintains a global, all-pairs relationship tensor and learns a generalized DP operator to progressively refine it. This enables the model to develop a task-specific relational calculus, providing a principled framework for capturing long-range dependencies. Theoretically, we prove that FloydNet achieves 3-WL (2-FWL) expressive power, and its generalized form aligns with the k-FWL hierarchy. FloydNet demonstrates state-of-the-art performance across challenging domains: it achieves near-perfect scores (often >99%) on the CLRS-30 algorithmic benchmark, finds exact optimal solutions for the general Traveling Salesman Problem (TSP) at rates significantly exceeding strong heuristics, and empirically matches the 3-WL test on the BREC benchmark. Our results establish this learned, DP-style refinement as a powerful and practical alternative to message passing for high-level graph reasoning.
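To make the "generalized DP operator over an all-pairs tensor" concrete, here is a minimal sketch of the refinement pattern the abstract describes. This is an illustrative reconstruction, not the paper's implementation: the function names (`generalized_floyd_update`, `combine`, `aggregate`) are my own, and in FloydNet the `combine`/`aggregate` steps would be learned neural modules rather than the fixed functions used below. The classic Floyd-Warshall algorithm is recovered as the min-plus instance.

```python
import numpy as np

def generalized_floyd_update(T, combine, aggregate):
    """One full refinement sweep over an all-pairs tensor T of shape (n, n, d).

    Classic Floyd-Warshall is the special case d = 1 with
    combine(a, b) = a + b and aggregate(x, y) = min(x, y).
    A learned operator (as in FloydNet) would replace these
    with neural modules; plain functions are used here for clarity.
    """
    n = T.shape[0]
    for k in range(n):  # pivot node, as in Floyd-Warshall
        # Candidate relation for every pair (i, j) routed through pivot k,
        # built by broadcasting row and column slices of T.
        cand = combine(T[:, k][:, None, :], T[k, :][None, :, :])
        T = aggregate(T, cand)  # keep the better of old and candidate
    return T

# Min-plus instance: all-pairs shortest paths on a small directed graph
# with edges 0->1 (w=3), 1->2 (w=1), 2->0 (w=1).
INF = 1e9
D = np.array([[0.0, 3.0, INF],
              [INF, 0.0, 1.0],
              [1.0, INF, 0.0]])[..., None]  # add a trailing feature dim
out = generalized_floyd_update(D,
                               combine=lambda a, b: a + b,
                               aggregate=np.minimum)[..., 0]
# out now holds the shortest-path distances between all node pairs.
```

The point of the abstraction is that each sweep updates every pair (i, j) using global information about paths through a pivot, rather than only messages from immediate neighbors, which is what lets this style of update escape the locality bottleneck of message passing.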