🤖 AI Summary
This work investigates how fine-grained neural circuits implementing classical algorithms such as Bellman-Ford can be identified within graph neural networks (GNNs), shedding light on their algorithmic alignment mechanisms. To this end, the study introduces attribution patching, a technique from mechanistic interpretability, into the domain of neural algorithmic reasoning for the first time, establishing an efficient circuit discovery toolbox tailored to GNNs. The proposed method recovers high-fidelity neural circuits from trained GNNs, characterizing how these circuits form and are pruned during training and revealing patterns of circuit sharing and reuse across multiple tasks. These findings offer a new perspective on how neural networks execute algorithmic computations, advancing our understanding of their internal mechanistic logic.
📝 Abstract
The recent field of neural algorithmic reasoning (NAR) studies the ability of graph neural networks (GNNs) to emulate classical algorithms like Bellman-Ford, a phenomenon known as algorithmic alignment. At the same time, recent advances in large language models (LLMs) have spawned the study of mechanistic interpretability, which aims to identify granular model components, such as circuits, that perform specific computations. In this work, we introduce Mechanistic Interpretability for Neural Algorithmic Reasoning (MINAR), an efficient circuit discovery toolbox that adapts attribution patching methods from mechanistic interpretability to the GNN setting. We show through two case studies that MINAR recovers faithful neuron-level circuits from GNNs trained on algorithmic tasks. Our study sheds new light on the process of circuit formation and pruning during training, and gives new insight into how GNNs trained to perform multiple tasks in parallel reuse circuit components across related tasks. Our code is available at https://github.com/pnnl/MINAR.
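To illustrate the general idea behind attribution patching referenced in the abstract, here is a minimal pure-Python sketch on a one-layer message-passing model. The toy graph, weights, and function names are illustrative assumptions, not the paper's actual method or code. Attribution patching replaces an expensive exact intervention (overwriting one hidden activation with its value from a corrupted run) with a first-order estimate: gradient times the activation difference.

```python
# Toy sketch of attribution patching on a one-layer message-passing model.
# All names, weights, and the graph below are illustrative, not from MINAR.

# Graph: 3 nodes, directed edges (0->1, 0->2, 1->2); 2 features per node.
edges = [(0, 1), (0, 2), (1, 2)]
W = [[1.0, -0.5], [0.5, 2.0]]              # message weight matrix (2x2)
r = [[0.3, 0.7], [1.0, -0.2], [0.5, 0.5]]  # per-node linear readout weights

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def hidden(x):
    # h[v] = sum over in-neighbors u of W @ x[u]
    h = [[0.0, 0.0] for _ in x]
    for u, v in edges:
        m = matvec(W, x[u])
        h[v] = [a + b for a, b in zip(h[v], m)]
    return h

def readout(h):
    return sum(r[v][i] * h[v][i] for v in range(len(h)) for i in range(2))

x_clean = [[1.0, 0.0], [0.0, 1.0], [2.0, -1.0]]
x_corrupt = [[0.0, 0.0], [1.0, 1.0], [0.0, 0.0]]

h_clean, h_corrupt = hidden(x_clean), hidden(x_corrupt)
base = readout(h_clean)

def exact_patch(v, i):
    # Exact activation patching: overwrite one hidden neuron (node v,
    # channel i) with its corrupted value and rerun the readout.
    h = [row[:] for row in h_clean]
    h[v][i] = h_corrupt[v][i]
    return readout(h) - base

def attribution(v, i):
    # Attribution patching: first-order estimate, gradient * (corrupt - clean).
    # The readout is linear in h, so its gradient w.r.t. h[v][i] is r[v][i]
    # and the estimate happens to be exact; in a deep nonlinear GNN it is
    # only an approximation, computed from one forward and one backward pass.
    return r[v][i] * (h_corrupt[v][i] - h_clean[v][i])
```

Because the toy readout is linear, the attribution matches the exact patch effect here; the practical appeal of the technique is that in a real network it scores every neuron from a single backward pass instead of one forward pass per intervention.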