AI Summary
This work addresses the insufficient systematic generalization of large language models (LLMs) and large reasoning models (LRMs) in qualitative spatial and temporal relational reasoning. To this end, we introduce the first controllable-difficulty benchmark explicitly designed to evaluate systematic generalization through relational composition. Methodologically, we construct a structured synthetic task suite and employ reinforcement learning fine-tuning combined with chain-of-thought prompting to rigorously control problem complexity and quantitatively measure out-of-distribution generalization. Our key contributions are threefold: (1) extending systematic generalization evaluation beyond mathematical and programming domains into qualitative spatiotemporal reasoning; (2) proposing a benchmark framework enabling precise difficulty calibration and empirical measurement of generalization boundaries; and (3) empirically demonstrating that state-of-the-art LLMs and LRMs perform poorly on these tasks, only modestly above chance, revealing a fundamental limitation in their capacity for qualitative relational reasoning.
Abstract
Large Language Models (LLMs) have been found to struggle with systematic reasoning. Even on tasks where they appear to perform well, their performance often depends on shortcuts, rather than on genuine reasoning abilities, leading them to collapse on out-of-distribution examples. Post-training strategies based on reinforcement learning and chain-of-thought prompting have recently been hailed as a step change. However, little is yet known about the potential of the resulting "Large Reasoning Models" (LRMs) beyond problem solving in mathematics and programming, where finding genuine out-of-distribution problems can be difficult. In this paper, we focus on tasks that require systematic reasoning about relational compositions, especially for qualitative spatial and temporal reasoning. These tasks allow us to control the difficulty of problem instances and to measure precisely the extent to which models can generalise. We find that the considered LLMs and LRMs perform poorly overall, albeit better than random chance.
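To make the notion of relational composition concrete, here is a minimal sketch (not taken from the paper, and using only a hypothetical toy subset of Allen-style temporal relations) of the kind of task the abstract describes: given qualitative relations between successive pairs of entities, infer the relation between an indirectly connected pair by composing relations along the chain. Longer chains give harder instances, which is how difficulty can be controlled.

```python
# Toy composition table for a two-relation fragment of qualitative temporal
# reasoning: COMPOSE[(r1, r2)] is the set of relations that can hold between
# A and C when r1 holds between A and B, and r2 between B and C.
# (Illustrative only; a real calculus such as Allen's has 13 base relations.)
COMPOSE = {
    ("before", "before"): {"before"},   # A before B, B before C => A before C
    ("before", "meets"):  {"before"},
    ("meets",  "before"): {"before"},
    ("meets",  "meets"):  {"before"},
}

def infer_chain(relations: list[str]) -> set[str]:
    """Compose a chain of pairwise relations into the set of possible
    relations between the first and last entity."""
    possible = {relations[0]}
    for r_next in relations[1:]:
        possible = {
            r_out
            for r_cur in possible
            for r_out in COMPOSE.get((r_cur, r_next), set())
        }
    return possible

# A length-3 chain: difficulty scales with the number of composition steps.
print(infer_chain(["before", "meets", "before"]))  # {'before'}
```

Benchmarks of this kind can vary the chain length and the relation vocabulary to calibrate difficulty, and can hold out longer chains to probe out-of-distribution generalisation.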