🤖 AI Summary
This work investigates the logical extrapolation capabilities of recurrent neural networks (RNNs) and implicit neural networks (INNs) on maze-solving tasks: specifically, whether models trained on easy mazes learn iterative algorithms that generalize to larger or structurally more complex ones. Using fixed-point analysis, dynamical-systems modeling, and trajectory visualization, we evaluate generalization across multiple axes of difficulty, including maze size and topological complexity. We find that INNs extrapolate stably with respect to grid scale but fail to generalize under increased topological complexity, while RNNs often solve mazes correctly yet settle into limit cycles rather than converging to a fixed point. This points to an "axial fragility" in logical extrapolation: robust generalization occurs along certain axes of difficulty (e.g., spatial scale) but not others. The results also link a network's intrinsic dynamics (convergent versus oscillatory) to its limiting behaviour when extrapolating, suggesting that logical extrapolation is less robust than previously reported.
📝 Abstract
Recent work has suggested that certain neural network architectures, particularly recurrent neural networks (RNNs) and implicit neural networks (INNs), are capable of logical extrapolation. That is, one may train such a network on easy instances of a specific task and then apply it successfully to more difficult instances of the same task. In this paper, we revisit this idea and show that (i) the capacity for extrapolation is less robust than previously suggested. Specifically, in the context of a maze-solving task, we show that while INNs (and some RNNs) are capable of generalizing to larger maze instances, they fail to generalize along axes of difficulty other than maze size. (ii) Models that are explicitly trained to converge to a fixed point (e.g., the INN we test) are likely to do so when extrapolating, while models that are not (e.g., the RNN we test) may exhibit more exotic limiting behaviour, such as limit cycles, even when they correctly solve the problem. Our results suggest that (i) further study is needed into why such networks extrapolate easily along certain axes of difficulty yet struggle along others, and (ii) analyzing the dynamics of extrapolation may yield insights into designing more efficient and interpretable logical extrapolators.
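The fixed-point-versus-limit-cycle distinction at the heart of the abstract can be illustrated on a toy iterated map. The sketch below is not the paper's method or models; it is a minimal, hypothetical helper (`classify_dynamics` is an invented name) that iterates an update rule and reports whether the orbit settles to a fixed point, revisits an earlier state (a limit cycle), or does neither within the iteration budget.

```python
# Toy sketch, not the paper's analysis: classify the limiting behaviour
# of an iterated map x <- step(x) as a fixed point, a limit cycle, or neither.

def classify_dynamics(step, x0, max_iters=1000, tol=1e-8):
    """Return ('fixed_point', x) if successive iterates stop moving,
    ('cycle', k) if the orbit revisits an earlier state with period k,
    or ('no_convergence', None) if neither occurs within max_iters."""
    history = [x0]
    x = x0
    for _ in range(max_iters):
        x_next = step(x)
        if abs(x_next - x) < tol:
            # Successive iterates coincide: converged to a fixed point.
            return ("fixed_point", x_next)
        for k, past in enumerate(reversed(history)):
            if abs(x_next - past) < tol:
                # Orbit returned to a previously visited state: period-(k+1) cycle.
                return ("cycle", k + 1)
        history.append(x_next)
        x = x_next
    return ("no_convergence", None)

# A contraction (analogous to an INN trained to converge) finds its fixed point:
print(classify_dynamics(lambda x: 0.5 * x + 1.0, x0=0.0))  # fixed point near 2.0
# An oscillatory update (analogous to some RNN iterations) yields a limit cycle:
print(classify_dynamics(lambda x: -x, x0=1.0))  # period-2 cycle
```

In the paper's setting the iterated state is a high-dimensional latent rather than a scalar, but the same bookkeeping (distance between successive iterates, distance to past iterates) underlies fixed-point and limit-cycle diagnostics.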