AI Summary
This work identifies "over-dilution", a phenomenon in message-passing neural networks (MPNNs) in which node features and inter-node representations weaken simultaneously within a single message-passing layer, severely degrading individual node representations. We formally define over-dilution for the first time and decompose it into two quantifiable dilution factors, one at the attribute level and one at the node level, offering a novel theoretical lens for analyzing representation degradation. Building on this insight, we propose a Transformer-based dilution-mitigation module, compatible with mainstream MPNN architectures, which employs adaptive feature reweighting and structure-aware aggregation to suppress information attenuation. Extensive experiments across multiple benchmark graph learning tasks demonstrate significant performance improvements, validating the effectiveness of our approach. The source code and datasets are publicly available.
Abstract
Message Passing Neural Networks (MPNNs) hold a key position in machine learning on graphs, but they struggle with unintended behaviors, such as over-smoothing and over-squashing, that arise from irregular data structures. The observation and formulation of these limitations have become foundational in constructing more informative graph representations. In this paper, we delve into the limitations of MPNNs, focusing on aspects that have previously been overlooked. Our observations reveal that even within a single layer, the information specific to an individual node can become significantly diluted. To examine this phenomenon in depth, we present the concept of Over-dilution and formulate it with two dilution factors: intra-node dilution for attribute-level representations and inter-node dilution for node-level representations. We also introduce a transformer-based solution that alleviates over-dilution and complements existing node embedding methods such as MPNNs. Our findings provide new insights and contribute to the development of informative representations. The implementation and supplementary materials are publicly available at https://github.com/LeeJunHyun/NATR.
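The node-level dilution described above can be illustrated with a toy sketch. This is not the paper's formulation, just a minimal example assuming mean aggregation with self-loops: after a single message-passing step, a node's own features contribute only 1/(deg + 1) of its updated representation, so a high-degree node's self-information is diluted within one layer.

```python
import numpy as np

def mean_aggregate(X, neighbors, node):
    """One mean-aggregation step with a self-loop: average the node's
    own feature vector with those of its neighbors."""
    idx = [node] + neighbors[node]
    return np.mean(X[idx], axis=0)

# Star graph: node 0 is connected to nodes 1..9.
num_nodes = 10
X = np.eye(num_nodes)  # one-hot features make each node's contribution easy to read
neighbors = {0: list(range(1, num_nodes))}

h0 = mean_aggregate(X, neighbors, 0)
print(h0[0])  # node 0's own signal shrinks to 1/10 after a single step
```

Here the center node's self-contribution is already 0.1 after one layer, before any over-smoothing across layers can occur, which is the single-layer effect the paper isolates.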