MINAR: Mechanistic Interpretability for Neural Algorithmic Reasoning

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how fine-grained neural circuits that implement classical algorithms, such as Bellman-Ford, can be identified inside graph neural networks (GNNs), with the goal of uncovering the mechanisms behind algorithmic alignment. The study is the first to bring attribution patching, a technique from mechanistic interpretability, into neural algorithmic reasoning, yielding an efficient circuit-discovery toolbox tailored to GNNs. The proposed method recovers high-fidelity neural circuits from trained GNNs, characterizes how these circuits form and are pruned during training, and reveals patterns of circuit sharing and reuse across multiple tasks. These findings offer a new perspective on how neural networks execute algorithmic computations, advancing our understanding of their internal mechanistic logic.
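For context on the algorithm being emulated: Bellman-Ford relaxes every edge for up to |V|-1 rounds, which has the same "send messages along edges, aggregate at nodes" shape as a GNN message-passing layer. This is the structural correspondence that algorithmic alignment refers to. A minimal illustration (not code from the paper):

```python
import math

def bellman_ford(num_nodes, edges, source):
    """Single-source shortest paths. edges: list of (u, v, weight)."""
    dist = [math.inf] * num_nodes
    dist[source] = 0.0
    for _ in range(num_nodes - 1):      # at most |V|-1 relaxation rounds
        for u, v, w in edges:           # each edge sends a "message" dist[u] + w
            if dist[u] + w < dist[v]:   # node v aggregates messages with min
                dist[v] = dist[u] + w
    return dist

print(bellman_ford(4, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.0)], 0))
# [0.0, 1.0, 3.0, 4.0]
```

The per-round loop over edges followed by a min-aggregation at each target node is exactly what one GNN layer with a min (or min-like) aggregator computes, which is why GNNs can learn to emulate this algorithm step-for-step.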

📝 Abstract
The recent field of neural algorithmic reasoning (NAR) studies the ability of graph neural networks (GNNs) to emulate classical algorithms like Bellman-Ford, a phenomenon known as algorithmic alignment. At the same time, recent advances in large language models (LLMs) have spawned the study of mechanistic interpretability, which aims to identify granular model components like circuits that perform specific computations. In this work, we introduce Mechanistic Interpretability for Neural Algorithmic Reasoning (MINAR), an efficient circuit discovery toolbox that adapts attribution patching methods from mechanistic interpretability to the GNN setting. We show through two case studies that MINAR recovers faithful neuron-level circuits from GNNs trained on algorithmic tasks. Our study sheds new light on the process of circuit formation and pruning during training, as well as giving new insight into how GNNs trained to perform multiple tasks in parallel reuse circuit components for related tasks. Our code is available at https://github.com/pnnl/MINAR.
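To make the core technique concrete: attribution patching estimates the effect of replacing ("patching") a clean activation with its corrupted counterpart, without rerunning the model per activation, via a first-order Taylor approximation: effect ≈ (a_corrupt − a_clean) · ∂metric/∂a_clean. The sketch below is a hypothetical minimal illustration on a one-layer linear message-passing model (all names and the toy graph are invented for the example, not taken from MINAR); because the toy model is linear, the approximation here matches the exact patch effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-node path-like graph (adjacency with self-loops), 3 features per node.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
W = rng.normal(size=(3, 3))

def forward(h):
    """One message-passing step: aggregate neighbours, then transform."""
    mid = A @ h          # aggregated activations (the site we patch)
    return mid, mid @ W  # node-wise linear update

target = 2  # the metric is the summed output of this node

h_clean = rng.normal(size=(4, 3))
h_corrupt = h_clean.copy()
h_corrupt[1] += 1.0      # corruption: perturb node 1's input features

mid_clean, out_clean = forward(h_clean)
mid_corrupt, _ = forward(h_corrupt)

# Gradient of the metric w.r.t. the clean aggregated activations.
# metric = mid[target] @ W.sum(axis=1), so only the target row is nonzero.
grad = np.zeros_like(mid_clean)
grad[target] = W.sum(axis=1)

# Attribution patching: first-order estimate, one backward pass for all sites.
attribution = (mid_corrupt - mid_clean) * grad

# Sanity check against exact activation patching for one (node, feature) site.
patched = mid_clean.copy()
patched[target, 0] = mid_corrupt[target, 0]
exact = (patched @ W)[target].sum() - out_clean[target].sum()
print(attribution[target, 0], exact)  # equal here, since the model is linear
```

In a real (nonlinear) GNN the gradient would come from one backward pass, and the attribution scores approximate, rather than equal, the exact patching effects; the efficiency gain is that all candidate circuit components are scored from a single clean run, corrupted run, and backward pass, instead of one forward pass per component.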
Problem

Research questions and friction points this paper is trying to address.

neural algorithmic reasoning
mechanistic interpretability
graph neural networks
algorithmic alignment
circuit discovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

mechanistic interpretability
neural algorithmic reasoning
graph neural networks
circuit discovery
attribution patching