Lyapunov Stable Graph Neural Flow

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph neural networks are vulnerable to adversarial perturbations in both graph topology and node features. To address this, the paper proposes a graph neural flow framework grounded in control theory that incorporates integer- and fractional-order Lyapunov stability constraints. A learnable, adaptive Lyapunov function combined with a novel projection mechanism maps the network dynamics into a provably stable region of the state space, yielding theoretically guaranteed robustness. The approach is orthogonal to existing defense strategies and can be combined with them, and it delivers significant gains over both baseline models and current state-of-the-art methods across multiple benchmark datasets and diverse attack scenarios.

📝 Abstract
Graph Neural Networks (GNNs) are highly vulnerable to adversarial perturbations in both topology and features, making the learning of robust representations a critical challenge. In this work, we bridge GNNs with control theory to introduce a novel defense framework grounded in integer- and fractional-order Lyapunov stability. Unlike conventional strategies that rely on resource-heavy adversarial training or data purification, our approach fundamentally constrains the underlying feature-update dynamics of the GNN. We propose an adaptive, learnable Lyapunov function paired with a novel projection mechanism that maps the network's state into a stable space, thereby offering theoretically provable stability guarantees. Notably, this mechanism is orthogonal to existing defenses, allowing for seamless integration with techniques like adversarial training to achieve cumulative robustness. Extensive experiments demonstrate that our Lyapunov-stable graph neural flows substantially outperform base neural flows and state-of-the-art baselines across standard benchmarks and various adversarial attack scenarios.
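The projection idea described in the abstract can be illustrated with a minimal sketch. Assumptions (not from the paper): a quadratic learnable Lyapunov candidate V(x) = xᵀ(PPᵀ + εI)x, a toy feature-update flow in place of a real GNN layer, and a half-space projection enforcing dV/dt ≤ −αV along the flow; the paper's actual adaptive Lyapunov function and fractional-order dynamics are more general.

```python
import numpy as np

def lyapunov_projection(f, x, P, alpha=1.0, eps=1e-3):
    """Project the vector-field value f at state x so the flow satisfies
    dV/dt <= -alpha * V for the quadratic candidate V(x) = x^T (P P^T + eps I) x.
    Illustrative only: stands in for the paper's learnable projection mechanism."""
    Q = P @ P.T + eps * np.eye(P.shape[0])   # positive-definite Lyapunov matrix
    V = x @ Q @ x                            # V(x) > 0 for x != 0
    grad_V = 2.0 * Q @ x                     # gradient of V at x
    violation = grad_V @ f + alpha * V       # > 0 means the stability condition fails
    if violation > 0:
        # remove exactly the violating component along grad_V
        f = f - violation * grad_V / (grad_V @ grad_V + 1e-12)
    return f

rng = np.random.default_rng(0)
d = 4
P = rng.standard_normal((d, d))              # parameters of the Lyapunov candidate
W = rng.standard_normal((d, d))              # toy "GNN" weight; may be unstable on its own
x = rng.standard_normal(d)

Q = P @ P.T + 1e-3 * np.eye(d)
V_start = x @ Q @ x
for _ in range(200):                         # explicit Euler integration of the flow
    f = np.tanh(W @ x)                       # toy feature-update dynamics
    x = x + 0.01 * lyapunov_projection(f, x, P)
V_end = x @ Q @ x
```

Because every projected step points into the half-space where V decays at rate at least α, V shrinks along the trajectory regardless of how adversarially W is chosen, which is the intuition behind the provable-robustness claim.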
Problem

Research questions and friction points this paper is trying to address.

Graph Neural Networks
adversarial perturbations
robust representations
Lyapunov stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lyapunov stability
Graph Neural Networks
adversarial robustness
neural flows
control theory
Haoyu Chu
School of Computer Science and Technology / School of Artificial Intelligence, China University of Mining and Technology, Xuzhou 221008, China
Xiaotong Chen
Wei Zhou
Wenjun Cui
CIWRO/University of Oklahoma, NOAA/OAR/NSSL
Atmospheric Sciences
Kai Zhao
Shikui Wei
Qiyu Kang
University of Science and Technology of China
Machine Learning, Computational Intelligence, AI for Science, Dynamical Systems