🤖 AI Summary
This study addresses the significant gap between human and neural network performance in visual abstract relational reasoning, particularly geometric rule generalization, by introducing an analysis and optimization framework grounded in representational geometry. First, we construct SimplifiedRPM, a streamlined Raven's Progressive Matrices benchmark, alongside controlled human behavioral experiments. Second, through layer-wise analysis of representational geometry, we identify a generalization bottleneck: representations of unseen rules collapse into the subspace shaped by training, impairing out-of-distribution generalization. Third, we propose SNRloss, a signal-to-noise-ratio-driven objective that explicitly enhances representational discriminability and structural consistency. Evaluating on the Scattering Compositional Learner (SCL), we show that geometric metrics quantitatively predict cross-rule generalization, that SNRloss substantially improves generalization accuracy, and that SCL's reasoning behavior aligns most closely with human subjects' patterns. Collectively, this work establishes representational geometry as a principled lens for diagnosing and enhancing abstract reasoning in deep networks.
📝 Abstract
Humans and other animals readily generalize abstract relations, such as recognizing "constant" in shape or color, whereas neural networks struggle. To investigate how neural networks generalize abstract relations, we introduce SimplifiedRPM, a novel benchmark for systematic evaluation. In parallel, we conduct human experiments to benchmark relational difficulty, enabling direct model-human comparisons. Testing four architectures, namely ResNet-50, Vision Transformer, Wild Relation Network, and the Scattering Compositional Learner (SCL), we find that SCL best aligns with human behavior and generalizes best. Building on a geometric theory of neural representations, we show that representational geometry predicts generalization. Layer-wise analysis reveals distinct relational reasoning strategies across models and suggests a trade-off in which representations of unseen rules are compressed into subspaces shaped by training. Guided by this geometric perspective, we propose and evaluate SNRloss, a novel objective that balances signal and noise in representation geometry. Our findings offer geometric insights into how neural networks generalize abstract relations, paving the way toward more human-like visual reasoning in AI.
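The abstract does not spell out how SNRloss is computed, but the intuition it states (separating rule representations while keeping within-rule variability small) can be sketched as a ratio of between-class "signal" to within-class "noise". The function below is a hypothetical NumPy illustration of such an SNR-style objective; the name `snr_loss`, the averaging choices, and the epsilon term are assumptions for this sketch, not the paper's actual formulation.

```python
import numpy as np

def snr_loss(features, labels):
    """Illustrative SNR-style objective (sketch, not the paper's definition).

    Signal: mean squared distance between class-mean representations.
    Noise:  average within-class variance of the representations.
    Returning the negative SNR makes "lower is better", so minimizing
    it encourages well-separated, compact rule representations.
    """
    classes = np.unique(labels)
    # Mean representation ("prototype") per class.
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Within-class variance, averaged over classes and feature dimensions.
    noise = np.mean([features[labels == c].var(axis=0).mean() for c in classes])
    # Pairwise squared distances between class means (the "signal").
    diffs = means[:, None, :] - means[None, :, :]
    sq_dists = (diffs ** 2).sum(axis=-1)
    n = len(classes)
    signal = sq_dists.sum() / (n * (n - 1))  # mean over distinct ordered pairs
    return -signal / (noise + 1e-8)
```

In a training setting this would be computed on a differentiable tensor (e.g. in PyTorch) over the penultimate-layer representations of a batch; the NumPy version here only illustrates the geometry being optimized.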