🤖 AI Summary
Existing graph condensation (GC) methods suffer significant performance degradation under adversarial perturbations, and mainstream robust graph learning techniques provide only limited mitigation. Method: We propose the Manifold-constrained Robust Graph Condensation framework (MRGC), the first GC framework grounded in geometric manifold theory. MRGC identifies the root cause of GC vulnerability: adversarial perturbations push the condensed graph off the original data's low-dimensional smooth manifold, destabilizing its classification complexity. To address this, MRGC jointly optimizes graph structure reconstruction, classification complexity control, and manifold regularization, thereby constraining the condensed graph's embeddings to remain stably on the intrinsic manifold. Contribution/Results: Extensive experiments demonstrate that MRGC consistently outperforms state-of-the-art methods across multiple benchmark datasets and adversarial attack scenarios, achieving a superior trade-off among condensation efficiency, classification stability, and generalization robustness.
📝 Abstract
Graph condensation (GC) has gained significant attention for its ability to synthesize smaller yet informative graphs. However, existing studies often overlook the robustness of GC when the original graph is corrupted. In such cases, we observe that GC performance deteriorates significantly, while existing robust graph learning techniques offer only limited relief. Through both empirical investigation and theoretical analysis, we reveal that GC is inherently an intrinsic-dimension-reducing process that synthesizes a condensed graph with lower classification complexity. Although this property is critical for effective GC performance, it is highly vulnerable to adversarial perturbations. To tackle this vulnerability and improve GC robustness, we adopt a geometric perspective on the graph data manifold and propose a novel Manifold-constrained Robust Graph Condensation framework named MRGC. Specifically, we introduce three graph data manifold learning modules that guide the condensed graph to lie within a smooth, low-dimensional manifold with minimal class ambiguity, thereby preserving the classification-complexity-reduction capability of GC and ensuring robust performance under universal adversarial attacks. Extensive experiments demonstrate the robustness of MRGC across diverse attack scenarios.
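To make the abstract's joint objective concrete, the sketch below shows one plausible way the three terms (structure reconstruction, classification complexity control, and manifold regularization) could be combined into a single loss. This is a hypothetical illustration, not the paper's actual formulation: the function name, the variance-based complexity proxy, the Laplacian smoothness term, and the weights `lam_c` / `lam_m` are all assumptions.

```python
import numpy as np

def manifold_regularized_loss(A_orig, A_cond, Z, labels, lam_c=0.1, lam_m=0.1):
    """Hypothetical sketch of a joint GC objective combining:
    reconstruction + classification-complexity control + manifold smoothness.
    All term definitions and weights here are illustrative assumptions."""
    # 1) Structure reconstruction: match a simple statistic of the condensed
    #    adjacency to the original graph's (mean edge density as a proxy).
    recon = (A_orig.mean() - A_cond.mean()) ** 2

    # 2) Classification-complexity proxy: total intra-class variance of the
    #    condensed node embeddings (lower variance -> less class ambiguity).
    complexity = 0.0
    for c in np.unique(labels):
        Zc = Z[labels == c]
        complexity += Zc.var(axis=0).sum()

    # 3) Manifold regularization: graph Laplacian smoothness tr(Z^T L Z),
    #    penalizing embeddings that vary sharply across condensed-graph edges
    #    (i.e., that drift off a smooth low-dimensional manifold).
    D = np.diag(A_cond.sum(axis=1))
    L = D - A_cond
    manifold = np.trace(Z.T @ L @ Z)

    return recon + lam_c * complexity + lam_m * manifold
```

In a real implementation these terms would be differentiable (e.g. in PyTorch) and minimized jointly over the condensed graph's adjacency, features, and labels; the sketch only illustrates how a manifold term constrains the condensed embeddings alongside the usual condensation losses.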