Unraveling the geometry of visual relational reasoning

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the significant gap between human and neural network performance in visual abstract relational reasoning—particularly geometric rule generalization. We introduce a representation-geometry-based analytical and optimization paradigm. First, we construct SimplifiedRPM, a streamlined Raven’s Progressive Matrices benchmark, alongside controlled human behavioral experiments. Second, via inter-layer representational geometry analysis, we identify a generalization bottleneck: unseen rule representations collapse into the trained subspace, impairing out-of-distribution generalization. Third, we propose SNRloss—a signal-to-noise-ratio-driven objective—that explicitly enhances representational discriminability and structural consistency. Evaluated on the Scattering Compositional Learner (SCL), our approach demonstrates that geometric metrics quantitatively predict cross-rule generalization performance; SNRloss substantially improves generalization accuracy; and SCL’s reasoning behavior most closely aligns with human subjects’ patterns. Collectively, this work establishes representational geometry as a principled lens for diagnosing and enhancing abstract reasoning capabilities in deep networks.

📝 Abstract
Humans and other animals readily generalize abstract relations, such as recognizing constancy in shape or color, whereas neural networks struggle. To investigate how neural networks generalize abstract relations, we introduce SimplifiedRPM, a novel benchmark for systematic evaluation. In parallel, we conduct human experiments to benchmark relational difficulty, enabling direct model-human comparisons. Testing four architectures--ResNet-50, Vision Transformer, Wild Relation Network, and Scattering Compositional Learner (SCL)--we find that SCL best aligns with human behavior and generalizes best. Building on a geometric theory of neural representations, we show that representational geometry predicts generalization. Layer-wise analysis reveals distinct relational reasoning strategies across models and suggests a trade-off in which unseen rule representations compress into training-shaped subspaces. Guided by this geometric perspective, we propose and evaluate SNRloss, a novel objective that balances representational geometry. Our findings offer geometric insights into how neural networks generalize abstract relations, paving the way for more human-like visual reasoning in AI.
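The abstract describes SNRloss as an objective that balances representational geometry so that rule classes are both discriminable and structurally consistent. The paper's exact formulation is not reproduced here, so the sketch below is only an illustrative assumption: it takes "signal" to be the mean squared distance between class centroids and "noise" to be the mean within-class variance, and minimizes their negative ratio. The function name `snr_loss` and these definitions are hypothetical, not the authors' implementation.

```python
import numpy as np

def snr_loss(reps, labels, eps=1e-8):
    """Illustrative SNR-style objective (assumed form, not the paper's).

    reps:   (N, D) array of representations.
    labels: (N,) integer rule labels.
    Returns a scalar; minimizing it pushes class centroids apart
    (signal up) while shrinking within-class spread (noise down).
    """
    classes = np.unique(labels)
    # One centroid per rule class.
    centroids = np.stack([reps[labels == c].mean(axis=0) for c in classes])

    # Noise: average squared distance of points to their own centroid.
    noise = np.mean([
        np.mean(np.sum((reps[labels == c] - centroids[i]) ** 2, axis=1))
        for i, c in enumerate(classes)
    ])

    # Signal: mean squared distance between distinct class centroids.
    diffs = centroids[:, None, :] - centroids[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=-1)
    k = len(classes)
    signal = sq_dists.sum() / (k * (k - 1))

    return -(signal / (noise + eps))
```

In a training loop this scalar would be computed on a differentiable framework's tensors rather than NumPy arrays; the NumPy version is only to make the geometry of the objective concrete.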
Problem

Research questions and friction points this paper is trying to address.

How do neural networks generalize abstract visual relations, and why do they lag behind humans?
How can relational generalization be evaluated systematically and compared against human behavior?
Can a geometry-aware training objective improve out-of-distribution rule generalization?
Innovation

Methods, ideas, or system contributions that make the work stand out.

SimplifiedRPM: a streamlined Raven's Progressive Matrices benchmark with matched human experiments
Representational geometry analysis that quantitatively predicts cross-rule generalization
SNRloss: a signal-to-noise-ratio objective that balances representational discriminability and consistency
Jiaqi Shang
Program in Neuroscience, Harvard Medical School, Boston, Massachusetts 02115, United States
Gabriel Kreiman
Professor, Harvard Medical School and Children's Hospital
Artificial Intelligence, Computational Biology, Computational Neuroscience
H. Sompolinsky
Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, United States; Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem 9190401, Israel