Aggregation Buffer: Revisiting DropEdge with a New Parameter Block

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
DropEdge alleviates overfitting in Graph Neural Networks (GNNs) via random edge dropout, yet delivers limited performance gains in supervised learning. This work theoretically identifies the root cause: degree bias and structural disparity inherent to the aggregation mechanisms of many GNN architectures, which undermine robustness to edge perturbations. To address this, the authors propose Aggregation Buffer (AB), a lightweight, learnable, plug-and-play parameter block that introduces buffered aggregation parameters without altering the network architecture or training procedure. AB jointly accounts for neighbor importance and structural bias, and is compatible with mainstream GNNs (e.g., GCN, GAT). Extensive experiments on benchmark datasets demonstrate consistent improvements in node classification accuracy and significantly enhanced robustness against edge perturbations and structural noise. Code and data are publicly available.

📝 Abstract
We revisit DropEdge, a data augmentation technique for GNNs which randomly removes edges to expose diverse graph structures during training. While being a promising approach to effectively reduce overfitting on specific connections in the graph, we observe that its potential performance gain in supervised learning tasks is significantly limited. To understand why, we provide a theoretical analysis showing that the limited performance of DropEdge comes from the fundamental limitation that exists in many GNN architectures. Based on this analysis, we propose Aggregation Buffer, a parameter block specifically designed to improve the robustness of GNNs by addressing the limitation of DropEdge. Our method is compatible with any GNN model, and shows consistent performance improvements on multiple datasets. Moreover, our method effectively addresses well-known problems such as degree bias or structural disparity as a unifying solution. Code and datasets are available at https://github.com/dooho00/agg-buffer.
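For readers unfamiliar with the augmentation the abstract revisits, the core of DropEdge is simply removing each edge independently with some probability during training. A minimal sketch (the function name, signature, and toy edge list below are illustrative choices, not code from the paper's repository):

```python
import random

def drop_edge(edge_list, p, seed=None):
    """Keep each edge independently with probability 1 - p.

    DropEdge-style augmentation: at every training epoch, a random
    subset of edges is removed so the GNN sees varied graph structures
    and overfits less to any specific connection.
    """
    rng = random.Random(seed)
    return [e for e in edge_list if rng.random() >= p]

# Toy graph: 5 edges over 4 nodes.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
kept = drop_edge(edges, p=0.4, seed=0)  # a random ~60% survive
```

In practice this is applied anew at each training step (so different epochs see different subgraphs), while evaluation uses the full graph. The paper's point is that this augmentation alone yields limited gains because of a limitation in the GNN aggregation itself, which the Aggregation Buffer parameter block is designed to address.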
Problem

Research questions and friction points this paper is trying to address.

Improving GNN robustness by addressing DropEdge limitations
Enhancing performance in supervised graph learning tasks
Mitigating degree bias and structural disparity in GNNs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Aggregation Buffer parameter block
Improves GNN robustness over DropEdge
Addresses degree bias and structural disparity
Dooho Lee
School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
Myeong Kong
School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
Sagad Hamid
School of Electrical Engineering, KAIST, Daejeon, Republic of Korea; Computer Science Department, University of Münster, Münster, Germany
Cheonwoo Lee
School of Electrical Engineering, KAIST, Daejeon, Republic of Korea
Jaemin Yoo
Assistant Professor, KAIST
Data Mining, Machine Learning, Graph Neural Networks, Time Series Analysis