From Priors to Predictions: Explaining and Visualizing Human Reasoning in a Graph Neural Network Framework

📅 2025-12-19
🤖 AI Summary
Human abstract reasoning from minimal examples relies on informal inductive biases whose formal characterization remains elusive. Method: We propose the first explicit, differentiable, graph-structured prior encoding human inductive biases, instantiated within a graph neural network (GNN)-based, prior-driven reasoning framework. The framework learns task-specific structural priors via graph topology search and employs key-subgraph visualization and computational-graph attribution to uncover individual differences in problem solving and the origins of errors. Contribution/Results: Systematic ablation studies and evaluations on the Abstraction and Reasoning Corpus (ARC) show that the model not only replicates human behavioral patterns but also localizes human-like errors attributable to flawed priors. It achieves substantial improvements in out-of-distribution generalization, interpretability, and human-model alignment, bridging cognitive modeling and deep learning through a structured, learnable inductive bias.

📝 Abstract
Humans excel at solving novel reasoning problems from minimal exposure, guided by inductive biases: assumptions about which entities and relationships matter. Yet the computational form of these biases and their neural implementation remain poorly understood. We introduce a framework that combines graph theory and graph neural networks (GNNs) to formalize inductive biases as explicit, manipulable priors over structure and abstraction. Using a human behavioral dataset adapted from the Abstraction and Reasoning Corpus (ARC), we show that differences in graph-based priors can explain individual differences in human solutions. Our method includes an optimization pipeline that searches over graph configurations, varying edge connectivity and node abstraction, and a visualization approach that identifies the computational graph: the subset of nodes and edges most critical to a model's prediction. Systematic ablation reveals how generalization depends on specific prior structures and internal processing, exposing why human-like errors emerge from incorrect or incomplete priors. This work provides a principled, interpretable framework for modeling the representational assumptions and computational dynamics underlying generalization, offering new insights into human reasoning and a foundation for more human-aligned AI systems.
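The search over graph configurations described above can be illustrated with a toy sketch. Everything here is hypothetical (the function names, the 4- vs 8-neighborhood candidates, and the one-step neighbor-max propagation rule are illustrative stand-ins, not the paper's implementation): each candidate grid adjacency acts as a structural prior, and the search scores each prior by how well propagation under it reproduces a target output.

```python
import numpy as np
from itertools import product

def grid_adjacency(h, w, diagonal=False):
    # Adjacency matrix over h*w grid cells; the connectivity choice
    # (4- vs 8-neighborhood) plays the role of a structural prior.
    n = h * w
    adj = np.zeros((n, n))
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if diagonal:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    for r, c in product(range(h), range(w)):
        for dr, dc in steps:
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                adj[r * w + c, rr * w + cc] = 1.0
    return adj

def search_prior(x, y, h, w):
    # Toy "topology search": score each candidate prior by how well
    # one step of neighbor-max propagation maps input x to target y.
    best = None
    for diagonal in (False, True):
        adj = grid_adjacency(h, w, diagonal)
        pred = np.maximum(x, (adj * x[None, :]).max(axis=1))
        err = float(np.abs(pred - y).sum())
        if best is None or err < best[1]:
            best = (diagonal, err)
    return best
```

On a task whose ground truth is 4-neighborhood dilation, the search prefers the non-diagonal prior; a richer version would vary node abstraction as well and fit the propagation weights by gradient descent rather than enumerating rules.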
Problem

Research questions and friction points this paper is trying to address.

Formalizing inductive biases as explicit priors in graph neural networks
Explaining individual differences in human reasoning using graph-based priors
Visualizing critical computational graph structures for interpretable generalization analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formalizes inductive biases as explicit graph-based priors
Optimizes graph configurations for connectivity and abstraction
Visualizes critical computational graphs to interpret predictions
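The third bullet can be sketched as a simple occlusion-style attribution. This is hypothetical illustrative code, not the authors' method: ablate each edge of the prior graph, measure how much the model output changes, and rank edges so the highest-scoring ones form the critical subgraph.

```python
import numpy as np

def message_pass(feats, adj):
    # One round of mean-aggregation message passing: each node
    # averages its own features with those of its neighbors.
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    return (feats + adj @ feats) / deg

def edge_attribution(feats, adj):
    # Score each undirected edge by how much deleting it perturbs
    # the output: an occlusion-style proxy for edge criticality.
    base = message_pass(feats, adj)
    scores = {}
    for i, j in zip(*np.nonzero(np.triu(adj))):
        pruned = adj.copy()
        pruned[i, j] = pruned[j, i] = 0.0
        out = message_pass(feats, pruned)
        scores[(int(i), int(j))] = float(np.abs(out - base).sum())
    return scores
```

Thresholding the scores and keeping the top-ranked edges yields a candidate "computational graph" in the abstract's sense; gradient-based or mask-learning attribution would scale better than this exhaustive ablation but follows the same logic.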
Quan Do
Boston University
Neuroscience
Caroline Ahn
Graduate Program for Neuroscience, Boston University, Boston, MA
Leah Bakst
Department of Psychological and Brain Sciences, Boston University, Boston, MA
Michael Pascale
Department of Psychological and Brain Sciences, Boston University, Boston, MA
Joseph T. McGuire
Boston University
Psychology, Cognitive Neuroscience, Decision making
Chantal E. Stern
Professor, Center for Memory and Brain, Dept. of Psychological and Brain Sciences, Boston University
Cognitive Neuroscience, Memory, Learning, Neuroimaging
Michael E. Hasselmo
Graduate Program for Neuroscience, Boston University, Boston, MA; Department of Psychological and Brain Sciences, Boston University, Boston, MA; Center for Systems Neuroscience, Boston University, Boston, MA; Cognitive Neuroimaging Center, Boston University, Boston, MA