Understanding Oversmoothing in GNNs as Consensus in Opinion Dynamics

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Oversmoothing in graph neural networks (GNNs) arises from deep message passing, causing node representations to converge and lose discriminative power. This work establishes a theoretical correspondence between GNN oversmoothing and consensus in opinion dynamics, proving that continuous-depth GNN message passing is equivalent to a linear consensus system and is therefore inherently prone to oversmoothing. To address this, the authors propose the behavior-inspired message passing neural network (BIMP), a differentiable and stable message aggregation mechanism grounded in nonlinear opinion dynamics that provably avoids oversmoothing; the theoretical analysis guarantees the existence and robustness of BIMP's fixed points. Empirical evaluation shows that BIMP outperforms state-of-the-art GNNs across multiple benchmark datasets while exhibiting strong robustness to both oversmoothing and adversarial attacks.

📝 Abstract
In contrast to classes of neural networks where the learned representations become increasingly expressive with network depth, the learned representations in graph neural networks (GNNs) tend to become increasingly similar. This phenomenon, known as oversmoothing, is characterized by learned representations that cannot be reliably differentiated, leading to reduced predictive performance. In this paper, we propose an analogy between oversmoothing in GNNs and consensus, or agreement, in opinion dynamics. Through this analogy, we show that the message passing structure of recent continuous-depth GNNs is equivalent to a special case of opinion dynamics (i.e., linear consensus models), which has been theoretically proven to converge to consensus (i.e., oversmoothing) for all inputs. Using the understanding developed through this analogy, we design a new continuous-depth GNN model based on nonlinear opinion dynamics and prove that our model, which we call the behavior-inspired message passing neural network (BIMP), circumvents oversmoothing for general inputs. Through extensive experiments, we show that BIMP is robust to oversmoothing and adversarial attack, and consistently outperforms competitive baselines on numerous benchmarks.
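The linear-consensus mechanism described in the abstract can be illustrated with a toy simulation (a sketch with a made-up graph and step size, not the paper's code): Euler-integrating the continuous-depth linear message passing dx/dt = -Lx, where L is the graph Laplacian, drives all node features on a connected graph to their feature-wise mean, which is exactly oversmoothing.

```python
import numpy as np

# Small undirected, connected graph (adjacency chosen for illustration only).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A  # graph Laplacian

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))     # random node features: 4 nodes, 2 dimensions

# Forward-Euler integration of the linear consensus ODE dx/dt = -L x.
dt = 0.05
for _ in range(2000):
    x = x - dt * (L @ x)

# All node representations collapse to the feature-wise mean (consensus),
# so they can no longer be differentiated: oversmoothing.
spread = np.abs(x - x.mean(axis=0)).max()
print(spread)  # effectively zero
```

Because 1ᵀL = 0, the feature mean is conserved along the trajectory, and every other mode decays at a rate set by the nonzero Laplacian eigenvalues; nonlinear opinion dynamics, as used in BIMP, are designed to break exactly this guaranteed collapse.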
Problem

Research questions and friction points this paper is trying to address.

Over-smoothing
Graph Neural Networks
Feature Similarity
Innovation

Methods, ideas, or system contributions that make the work stand out.

BIMP
Over-smoothing Problem
Robustness
Keqin Wang
Mechanical and Aerospace Engineering Department, Princeton University, Princeton, United States
Yulong Yang
Princeton University
Dynamics and Control · Physics Guided Deep Learning
Ishan Saha
Electrical and Computer Engineering Department, Princeton University, Princeton, United States
Christine Allen-Blanchette
Assistant Professor, Princeton University
Computer vision