🤖 AI Summary
This work establishes a computable translation from the μ-fragment of the graded modal μ-calculus to distributed message-passing automata, clarifying the expressive relationship of these formalisms with recurrent Graph Neural Networks (GNNs) over the reals. Methodologically, the paper constructs, for the first time, a rigorous correspondence between a fragment of the modal μ-calculus, namely the graded modal substitution calculus, and distributed automata. It then shows that, in restriction to monadic second-order logic (MSO), this calculus is expressively equivalent to real-valued recurrent GNNs. The key contribution is a constructive proof of this equivalence, replacing prior non-constructive arguments and thereby improving interpretability and implementability. The result provides a theoretical bridge between GNNs and modal logic, advancing the foundational understanding of the logical expressivity of graph neural architectures.
📝 Abstract
This paper gives a translation from the $\mu$-fragment of the graded modal $\mu$-calculus to a class of distributed message-passing automata. As a corollary, we obtain an alternative proof of a theorem from \cite{ahvonen_neurips} stating that recurrent graph neural networks working with reals and the graded modal substitution calculus have the same expressive power in restriction to monadic second-order logic (MSO).