🤖 AI Summary
This study investigates the cognitive strategies and behavioral patterns underlying human abstract rule reasoning. By constructing CogARC, a human-aligned visual reasoning dataset, we recorded high-temporal-resolution behavioral trajectories from 260 participants across 75 tasks, capturing interactions such as example inspection, edit sequences, and multi-round submissions. This work is the first to systematically reveal that human performance on ARC-like tasks exhibits both strategic diversity and convergence. Quantitative analysis shows average participant accuracy of 80%–90%; challenging problems elicit longer deliberation times and greater strategy divergence, yet erroneous responses remain highly convergent, reflecting shared cognitive constraints. These findings provide fine-grained behavioral evidence and a new benchmark for understanding the mechanisms of human abstract reasoning.
📝 Abstract
Humans exhibit remarkable flexibility in abstract reasoning and can rapidly learn and apply rules from sparse examples. To investigate the cognitive strategies underlying this ability, we introduce the Cognitive Abstraction and Reasoning Corpus (CogARC), a diverse human-adapted subset of the Abstraction and Reasoning Corpus (ARC), which was originally developed to benchmark abstract reasoning in artificial intelligence. Across two experiments, CogARC was administered to a total of 260 human participants, who freely generated solutions to 75 abstract visual reasoning problems. Success required inferring input-output rules from a small number of examples and applying them to transform the test input into the correct test output. Participants' behavior was recorded at high temporal resolution, including example viewing, edit sequences, and multi-attempt submissions. Participants were generally successful (mean accuracy across problems of ~90% in Experiment 1, n = 40, and ~80% in Experiment 2, n = 220), but performance varied widely across problems and participants. Harder problems elicited longer deliberation times and greater divergence in solution strategies. Over the course of the task, participants initiated responses more quickly but showed a slight decline in accuracy, suggesting increased familiarity with the task structure rather than improved rule-learning ability. Notably, incorrect solutions were often highly convergent, even when the problem-solving trajectories that produced them differed in length and smoothness: some trajectories progressed directly and efficiently toward a stable outcome, whereas others involved extended exploration or partial restarts before converging. Together, these findings highlight CogARC as a rich behavioral environment for studying human abstract reasoning, providing insight into how people generalize, misgeneralize, and adapt their strategies under uncertainty.