Mind The Gap: Deep Learning Doesn't Learn Deeply

📅 2025-05-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the expressivity–trainability gap in neural networks—particularly graph neural networks (GNNs)—when learning classical graph algorithms (e.g., BFS, DFS, Bellman–Ford). The central question is whether learned models faithfully implement the target algorithm’s logic when they empirically succeed, and why most sequential algorithms resist end-to-end learning. To address this, we introduce *neural compilation* for GNNs—a novel approach that bypasses gradient-based training by directly constructing network weights that provably execute the target algorithm step-for-step. Empirical analysis reveals three key findings: (i) only NC-class parallel algorithms are reliably learnable; (ii) canonical sequential algorithms exhibit a fundamental expressivity–trainability gap; and (iii) compiled models serve not only as interpretable, verifiable baselines but also expose intrinsic limitations of inductive learning beyond NC. Our framework establishes a new foundation for robust, formally grounded algorithmic learning.
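The summary above describes neural compilation: setting a GNN's weights by hand so the network provably executes an algorithm step by step, with no gradient training. Below is an illustrative sketch of that idea for the simplest case in the paper, parallel BFS reachability; the weight values, function names, and thresholding nonlinearity here are our own minimal construction, not the paper's actual one.

```python
# Hand-set ("compiled") parameters: with these values, one message-passing
# layer provably executes one parallel BFS step. No training is involved.
W_SELF = 1.0   # weight on a node's own state
W_MSG = 1.0    # weight on each incoming neighbour message

def compiled_bfs_layer(x, adj):
    """x[i] in {0.0, 1.0}: whether node i is reached. adj[i]: neighbours of i.
    Update: x_i <- step(W_SELF * x_i + W_MSG * sum_{j in N(i)} x_j),
    i.e. a node becomes reached iff it or any neighbour was reached."""
    msgs = [sum(x[j] for j in adj[i]) for i in range(len(x))]  # sum-aggregation
    return [1.0 if W_SELF * x[i] + W_MSG * msgs[i] > 0.0 else 0.0
            for i in range(len(x))]

def compiled_bfs(adj, source, steps):
    """Unroll the compiled layer: after k steps, exactly the nodes within
    distance k of the source are marked reached."""
    x = [0.0] * len(adj)
    x[source] = 1.0
    for _ in range(steps):
        x = compiled_bfs_layer(x, adj)
    return x
```

Because the parameters are fixed analytically, every intermediate vector has an exact algorithmic meaning (the BFS frontier after k steps), which is what makes compiled networks usable as interpretable, verifiable baselines for comparison against trained ones.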

📝 Abstract
This paper aims to understand how neural networks learn algorithmic reasoning by addressing two questions: How faithful are learned algorithms when they are effective, and why do neural networks fail to learn effective algorithms otherwise? To answer these questions, we use neural compilation, a technique that directly encodes a source algorithm into neural network parameters, enabling the network to compute the algorithm exactly. This enables comparison between compiled and conventionally learned parameters, intermediate vectors, and behaviors. This investigation is crucial for developing neural networks that robustly learn complex algorithms from data. Our analysis focuses on graph neural networks (GNNs), which are naturally aligned with algorithmic reasoning tasks, specifically our choices of BFS, DFS, and Bellman-Ford, which cover the spectrum of effective, faithful, and ineffective learned algorithms. Commonly, learning algorithmic reasoning is framed as induction over synthetic data, where a parameterized model is trained on inputs, traces, and outputs produced by an underlying ground-truth algorithm. In contrast, we introduce a neural compilation method for GNNs, which sets network parameters analytically, bypassing training. Focusing on GNNs leverages their alignment with algorithmic reasoning, the extensive literature on algorithmic induction, and the novel application of neural compilation to GNNs. Overall, this paper aims to characterize expressivity–trainability gaps, a fundamental shortcoming in learning algorithmic reasoning. We hypothesize that inductive learning is most effective for parallel algorithms contained within the computational class NC.
Problem

Research questions and friction points this paper is trying to address.

Understand faithfulness of learned algorithms in neural networks
Investigate why neural networks fail to learn effective algorithms
Characterize expressivity–trainability gaps in algorithmic reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural compilation encodes algorithms directly
Analytically sets GNN parameters without training
Compares compiled and learned network behaviors
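The second bullet, setting parameters analytically rather than by training, can also be sketched for Bellman-Ford, the weighted shortest-path algorithm studied in the paper. The construction below is again our own hypothetical illustration: it replaces the sum-aggregation of a standard GNN layer with its tropical (min-plus) analogue, so each synchronous round exactly implements one Bellman-Ford relaxation sweep.

```python
INF = float("inf")

def compiled_bf_step(dist, edges):
    """One compiled message-passing round for single-source shortest paths.
    Each directed edge (j, i, w) carries the message dist[j] + w, and nodes
    aggregate with min: d_i <- min(d_i, min_j (d_j + w_ji))."""
    new = list(dist)
    for j, i, w in edges:
        if dist[j] + w < new[i]:
            new[i] = dist[j] + w
    return new

def compiled_bellman_ford(n, edges, source):
    """Unroll n-1 rounds, which provably suffice on a graph with n nodes
    and no negative cycles; every round matches one algorithm step."""
    dist = [INF] * n
    dist[source] = 0.0
    for _ in range(n - 1):
        dist = compiled_bf_step(dist, edges)
    return dist
```

Comparing such exactly-constructed parameters and their intermediate distance vectors against those of a conventionally trained network is what lets the paper measure how faithful a learned algorithm actually is.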