🤖 AI Summary
Existing evaluations assess deductive, inductive, or abductive reasoning in isolation, failing to capture large language models' (LLMs') integrated reasoning capabilities in unfamiliar environments.
Method: We propose a novel black-box interaction paradigm wherein models infer hidden functions solely from limited rounds of input-output observations—unifying the assessment of all three reasoning types through collaborative, iterative hypothesis generation, testing, and refinement. To support this, we introduce Oracle, the first benchmark comprising six task categories and 96 diverse black-box functions, enabling end-to-end evaluation of high-level planning and adaptive exploration.
Contribution/Results: We evaluate 19 state-of-the-art LLMs; while top models (e.g., o3) achieve >70% accuracy on simple tasks, performance drops below 40% on challenging ones—revealing systemic weaknesses in dynamic hypothesis formation, validation, and exploration strategy. Oracle provides a scalable, reproducible framework for rigorous, holistic reasoning evaluation.
📝 Abstract
Existing tasks fall short in evaluating the reasoning ability of Large Language Models (LLMs) in interactive, unknown environments. This deficiency leads to the isolated assessment of deductive, inductive, and abductive reasoning, neglecting the integrated reasoning process that is indispensable for human discovery of the real world. We introduce a novel evaluation paradigm, *black-box interaction*, to tackle this challenge. A black-box is defined by a hidden function that maps a specific set of inputs to outputs. LLMs are required to unravel the hidden function behind the black-box by interacting with it within a given number of exploration turns and reasoning over the observed input-output pairs. Leveraging this idea, we build the *Oracle* benchmark, which comprises 6 types of black-box tasks and 96 black-boxes. We benchmark 19 modern LLMs. o3 ranks first in 5 of the 6 tasks, achieving over 70% accuracy on most easy black-boxes, but it still struggles with some hard black-box tasks, where its average performance drops below 40%. Further analysis reveals a universal difficulty among LLMs: they lack the high-level planning capability to develop efficient and adaptive exploration strategies for hypothesis refinement.
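The black-box interaction loop described above can be sketched in a few lines. This is a minimal illustrative toy, not the actual Oracle benchmark API: the function names, the linear toy black-box, and the probing/fitting strategies are all assumptions made for exposition.

```python
# Toy sketch of the black-box interaction paradigm (illustrative only,
# not the Oracle benchmark API).

def hidden_function(x):
    """A toy black-box: the solver must infer this rule from queries."""
    return 2 * x + 3

def interact(black_box, propose_input, refine_hypothesis, max_turns=8):
    """Iteratively query the black-box and refine a hypothesis."""
    observations = []  # accumulated (input, output) pairs
    hypothesis = None
    for _ in range(max_turns):
        x = propose_input(observations)      # planning: pick an informative probe
        y = black_box(x)                     # observe the black-box output
        observations.append((x, y))
        hypothesis = refine_hypothesis(observations)  # induction: fit a rule
    return hypothesis, observations

# Example strategy: infer a linear rule y = a*x + b from successive probes.
def propose_input(obs):
    return len(obs)  # probe x = 0, 1, 2, ...

def refine_hypothesis(obs):
    if len(obs) < 2:
        return None
    (x0, y0), (x1, y1) = obs[0], obs[1]
    a = (y1 - y0) / (x1 - x0)
    b = y0 - a * x0
    return (a, b)

hypothesis, obs = interact(hidden_function, propose_input, refine_hypothesis,
                           max_turns=3)
print(hypothesis)  # → (2.0, 3.0)
```

In the benchmark itself the LLM plays all three roles at once: it must choose which inputs to try (exploration), generalize a candidate rule from the observations (induction/abduction), and deduce predictions to test that rule against further queries.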