🤖 AI Summary
Graph neural networks are vulnerable to adversarial perturbations in both graph topology and node features. To address this limitation, this work proposes a graph neural flow framework grounded in control theory, incorporating integer- and fractional-order Lyapunov stability constraints. By introducing a learnable adaptive Lyapunov function together with a novel projection mechanism, the method maps the network dynamics into a provably stable region of the state space, yielding theoretically guaranteed robustness. The approach is orthogonal to, and composable with, existing defense strategies, and delivers significant performance gains over both baseline models and current state-of-the-art methods across multiple benchmark datasets and diverse attack scenarios.
📝 Abstract
Graph Neural Networks (GNNs) are highly vulnerable to adversarial perturbations in both topology and features, making the learning of robust representations a critical challenge. In this work, we bridge GNNs with control theory to introduce a novel defense framework grounded in integer- and fractional-order Lyapunov stability. Unlike conventional strategies that rely on resource-heavy adversarial training or data purification, our approach fundamentally constrains the underlying feature-update dynamics of the GNN. We propose an adaptive, learnable Lyapunov function paired with a novel projection mechanism that maps the network's state into a stable space, thereby offering theoretically provable stability guarantees. Notably, this mechanism is orthogonal to existing defenses, allowing for seamless integration with techniques like adversarial training to achieve cumulative robustness. Extensive experiments demonstrate that our Lyapunov-stable graph neural flows substantially outperform base neural flows and state-of-the-art baselines across standard benchmarks and various adversarial attack scenarios.
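The projection idea in the abstract — modifying the feature-update dynamics so a Lyapunov function is guaranteed to decrease — can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's method: it assumes a fixed quadratic Lyapunov function V(x) = ||x||² (where the paper uses a learnable adaptive one) and a generic linear map as the nominal flow. It uses the standard stable-dynamics projection f_s = f − ∇V · relu(∇Vᵀf + αV) / ||∇V||², which enforces dV/dt ≤ −αV along the corrected flow.

```python
import numpy as np

def project_stable(f, grad_V, V, alpha=0.5, eps=1e-8):
    """Project nominal dynamics f into the stable region.

    Computes f_s = f - grad_V * relu(grad_V . f + alpha*V) / ||grad_V||^2,
    which guarantees grad_V . f_s <= -alpha * V, i.e. V decays at least
    exponentially along the corrected flow.
    """
    violation = float(np.dot(grad_V, f)) + alpha * V
    correction = max(violation, 0.0) / (float(np.dot(grad_V, grad_V)) + eps)
    return f - correction * grad_V

# Illustrative setup (hypothetical, not the paper's learned components):
# a random linear "message-passing" step as the nominal feature dynamics.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))        # unconstrained, possibly unstable map
x = rng.normal(size=4)             # node feature state

V = float(x @ x)                   # V(x) = ||x||^2, fixed quadratic Lyapunov fn
grad_V = 2.0 * x                   # its gradient
f_nominal = A @ x                  # nominal flow, no stability guarantee
f_stable = project_stable(f_nominal, grad_V, V)

# The projected flow satisfies dV/dt = grad_V . f_stable <= -alpha * V.
print(grad_V @ f_stable <= -0.5 * V + 1e-6)
```

When the nominal flow already satisfies the decrease condition, the projection is a no-op; otherwise it subtracts exactly the component of f along ∇V needed to restore it, which is why the guarantee is pointwise and does not require retraining the nominal dynamics.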