Benchmarking Systematic Relational Reasoning with Large Language and Reasoning Models

📅 2025-03-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the insufficient systematic generalization of large language models (LLMs) and large reasoning models (LRMs) in qualitative spatial and temporal relational reasoning. To this end, it introduces the first controllable-difficulty benchmark explicitly designed to evaluate systematic generalization through relational composition. Methodologically, the authors construct a structured synthetic task suite that allows problem complexity to be rigorously controlled, so that out-of-distribution generalization can be measured quantitatively; the models evaluated include LRMs post-trained with reinforcement learning and prompted with chain-of-thought. The key contributions are threefold: (1) extending systematic generalization evaluation beyond mathematical and programming domains into qualitative spatiotemporal reasoning; (2) proposing a benchmark framework enabling precise difficulty calibration and empirical measurement of generalization boundaries; and (3) empirically demonstrating that state-of-the-art LLMs and LRMs perform poorly on these tasks, only modestly above random chance, revealing a fundamental limitation in their capacity for systematic relational reasoning.

๐Ÿ“ Abstract
Large Language Models (LLMs) have been found to struggle with systematic reasoning. Even on tasks where they appear to perform well, their performance often depends on shortcuts rather than on genuine reasoning abilities, leading them to collapse on out-of-distribution examples. Post-training strategies based on reinforcement learning and chain-of-thought prompting have recently been hailed as a step change. However, little is still known about the potential of the resulting "Large Reasoning Models" (LRMs) beyond problem solving in mathematics and programming, where finding genuine out-of-distribution problems can be difficult. In this paper, we focus on tasks that require systematic reasoning about relational compositions, especially for qualitative spatial and temporal reasoning. These tasks allow us to control the difficulty of problem instances and to measure precisely to what extent models can generalise. We find that the considered LLMs and LRMs overall perform poorly, albeit better than random chance.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' systematic relational reasoning abilities
Evaluating LRMs beyond math and programming tasks
Measuring generalization in qualitative spatial-temporal reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Focus on systematic relational reasoning tasks
Control difficulty of problem instances precisely
Measure model generalization capabilities accurately
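The idea of controlling difficulty through relational composition can be illustrated with a minimal sketch. The generator below is hypothetical (not the paper's actual task suite): it encodes cardinal-direction relations as displacement vectors, samples a chain of relational facts, and derives the composed relation between the first and last entity, so that chain length directly calibrates instance difficulty.

```python
import random

# Hypothetical illustration, not the paper's generator: qualitative
# cardinal-direction relations encoded as unit displacement vectors.
RELATIONS = {
    "north": (0, 1), "south": (0, -1),
    "east": (1, 0), "west": (-1, 0),
}

def compose(chain):
    """Compose a chain of cardinal relations into the net qualitative
    relation between the first and last entity (None if they coincide)."""
    dx = sum(RELATIONS[r][0] for r in chain)
    dy = sum(RELATIONS[r][1] for r in chain)
    if dx == 0 and dy == 0:
        return None  # displacements cancel: no directional relation
    ns = "north" if dy > 0 else "south" if dy < 0 else ""
    ew = "east" if dx > 0 else "west" if dx < 0 else ""
    return (ns + ew) if ns and ew else (ns or ew)

def make_instance(chain_length, rng=random):
    """Sample a story of `chain_length` relational facts; the ground-truth
    answer is the composed relation, so difficulty scales with the chain."""
    chain = [rng.choice(list(RELATIONS)) for _ in range(chain_length)]
    facts = [f"e{i+1} is {r} of e{i}" for i, r in enumerate(chain)]
    return facts, compose(chain)
```

A model that has genuinely learned the composition rules should answer correctly regardless of chain length; a model relying on shortcuts will degrade as chains grow longer than those seen in training, which is exactly the out-of-distribution gap such a benchmark is designed to expose.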
Irtaza Khalid
School of Computer Science and Informatics, Cardiff University, United Kingdom
Amir Masoud Nourollah
School of Computer Science and Informatics, Cardiff University, United Kingdom
Steven Schockaert
Cardiff University
artificial intelligence · knowledge representation · natural language processing · commonsense reasoning