🤖 AI Summary
This work systematically characterizes the expressive power of mean-aggregation graph neural networks (Mean-GNNs). Addressing both the non-uniform and the uniform setting, it establishes precise logical characterizations: Mean-GNNs are expressively equivalent to ratio modal logic (RML) in the non-uniform setting and, relative to MSO, to alternation-free modal logic in the uniform setting, under the assumptions that combination functions are continuous and classification functions are thresholds. This places the non-uniform expressive power of Mean-GNNs strictly between that of max-aggregation GNNs (characterized by basic modal logic) and sum-aggregation GNNs (characterized by graded modal logic), while in the uniform setting, relative to MSO, Mean-GNNs are strictly less expressive than both max- and sum-aggregation GNNs. The analysis further shows that the continuity and threshold-classification assumptions are essential: dropping either of them increases the expressive power.
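To make the ratio modal operators concrete, here is a minimal sketch of their intended semantics in standard modal-logic notation; the operator symbol and the handling of vertices without successors are illustrative assumptions, not necessarily the paper's exact definitions.

```latex
% Hypothetical notation: a ratio modality <p> for a rational threshold p in [0,1].
% v |= <p> phi  iff  at least a fraction p of the successors of v satisfy phi
% (the convention for vertices with no successors is left to the paper).
\[
  v \models \langle p \rangle \varphi
  \quad\text{iff}\quad
  \frac{|\{\, w : v \to w \text{ and } w \models \varphi \,\}|}{|\{\, w : v \to w \,\}|} \;\ge\; p .
\]
```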
📝 Abstract
We study the expressive power of graph neural networks (GNNs) with mean as the aggregation function. In the non-uniform setting, we show that such GNNs have exactly the same expressive power as ratio modal logic, which has modal operators expressing that at least a certain ratio of the successors of a vertex satisfies a specified property. The non-uniform expressive power of mean GNNs is thus higher than that of GNNs with max aggregation, but lower than that of GNNs with sum aggregation; the latter two are characterized by modal logic and graded modal logic, respectively. In the uniform setting, we show that the expressive power relative to MSO is exactly that of alternation-free modal logic, under the natural assumptions that combination functions are continuous and classification functions are thresholds. This implies that, relative to MSO and in the uniform setting, mean GNNs are strictly less expressive than sum GNNs and max GNNs. When any of the assumptions is dropped, the expressive power increases.
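For readers who want a concrete picture of the model class being characterized, below is a minimal sketch of a mean-aggregation GNN layer with a continuous (ReLU-based) combination function and a threshold classifier. All names, shapes, and parameter choices are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch (not the paper's construction): one mean-aggregation GNN layer
# followed by a threshold classifier, using NumPy. Names and shapes are assumptions.
import numpy as np

def mean_gnn_layer(X, adj, W_self, W_neigh, b):
    """One layer: combine each vertex's state with the MEAN of its successors' states.

    X        : (n, d) vertex feature matrix
    adj      : (n, n) adjacency matrix, adj[v, w] = 1 if there is an edge v -> w
    W_self, W_neigh : (d, d') weight matrices of the (continuous) combination function
    b        : (d',) bias
    """
    deg = adj.sum(axis=1, keepdims=True)       # out-degree of each vertex
    safe_deg = np.maximum(deg, 1)              # avoid division by zero for vertices with no successors
    mean_neigh = (adj @ X) / safe_deg          # mean over successor states
    return np.maximum(X @ W_self + mean_neigh @ W_neigh + b, 0.0)  # ReLU: a continuous combination

def threshold_classify(H, w, t=0.5):
    """Threshold classification: a vertex is accepted iff its score reaches the threshold t."""
    return H @ w >= t

# Tiny usage example on a 3-vertex graph with random parameters.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]], dtype=float)
X = rng.normal(size=(3, 4))
W_self, W_neigh, b = rng.normal(size=(4, 4)), rng.normal(size=(4, 4)), np.zeros(4)
H = mean_gnn_layer(X, adj, W_self, W_neigh, b)
print(threshold_classify(H, rng.normal(size=4)))
```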