🤖 AI Summary
To address inefficient signal propagation and over-squashing in graph neural networks, which arise from flat or ill-conditioned energy landscapes, this paper proposes TANGO, a dynamical-systems-based framework for graph representation learning. Methodologically, TANGO decomposes node feature evolution into two orthogonal components: (i) a convergent component that descends along the energy gradient, governed by a learnable Lyapunov function that defines the energy landscape; and (ii) an expressive component, modeled via message passing, that flows tangentially along level sets of the energy function. This design guarantees dynamical stability while increasing modeling flexibility and mitigating over-squashing. Empirically, TANGO achieves state-of-the-art performance across diverse node-level and graph-level classification and regression benchmarks, validating the effectiveness and strong generalization of jointly learning the energy function and the tangential flow.
📝 Abstract
We introduce TANGO -- a dynamical-systems-inspired framework for graph representation learning that governs node feature evolution through a learned energy landscape and its associated descent dynamics. At the core of our approach is a learnable Lyapunov function over node embeddings, whose gradient defines an energy-reducing direction that guarantees convergence and stability. To enhance flexibility while preserving the benefits of energy-based dynamics, we incorporate a novel tangential component, learned via message passing, that evolves features while maintaining the energy value. This decomposition into orthogonal flows, energy gradient descent and tangential evolution, yields a flexible form of graph dynamics and enables effective signal propagation even in flat or ill-conditioned energy regions, which often arise in graph learning. Our method mitigates over-squashing and is compatible with different graph neural network backbones. Empirically, TANGO achieves strong performance across a diverse set of node-level and graph-level classification and regression benchmarks, demonstrating the effectiveness of jointly learned energy functions and tangential flows for graph neural networks.
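The core decomposition can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the function name `tango_step`, the step sizes `alpha` and `beta`, and the use of a plain vector in place of learned node embeddings are all assumptions made for illustration. It shows the two orthogonal pieces: a descent step along the negative energy gradient, and a message-passing direction projected onto the tangent space of the energy level set (so it preserves the energy value to first order).

```python
import numpy as np

def tango_step(x, grad_energy, message, alpha=0.1, beta=1.0, eps=1e-8):
    """One hypothetical TANGO-style update for a single node embedding.

    x           : current node feature vector
    grad_energy : gradient of the learned Lyapunov (energy) function at x
    message     : raw message-passing output (the expressive direction)

    The message is projected orthogonally to grad_energy, so it moves
    along the energy level set to first order, while the -alpha * grad
    term strictly decreases the energy.
    """
    g = grad_energy
    g_norm_sq = np.dot(g, g) + eps  # eps guards against flat regions
    # Tangential component: remove the part of the message parallel to g
    tangential = message - (np.dot(message, g) / g_norm_sq) * g
    # Convergent descent step plus energy-preserving tangential flow
    return x - alpha * g + beta * tangential
```

For a toy quadratic energy E(x) = 0.5‖x‖², the gradient is x itself, and one can verify that the tangential part of the update is orthogonal to the gradient, matching the level-set-preserving behavior described in the abstract.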