InductionBench: LLMs Fail in the Simplest Complexity Class

📅 2025-02-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the weak inductive reasoning capability of large language models (LLMs). We introduce InductionBench, the first benchmark specifically designed to evaluate inductive reasoning. Methodologically, we systematically incorporate subregular complexity classes from formal language theory—such as strictly local (SL) and strictly piecewise (SP) languages—into LLM evaluation for the first time; tasks are constructed using finite-state automata to ensure syntactic rigor, minimal sample size, and controllable difficulty. Our contributions are threefold: (1) we fill a critical assessment gap for scientific discovery—the ability to infer latent rules from finite observations; (2) we empirically demonstrate that state-of-the-art models (e.g., o1, o3) perform below chance on the most basic subregular tasks, revealing a fundamental deficit in their inductive reasoning; and (3) InductionBench establishes a new paradigm for diagnosing and advancing systematic reasoning in LLMs.
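The strictly local (SL) class mentioned above is the simplest rung of the subregular hierarchy: a string belongs to an SL_k language iff every window of k adjacent symbols (with word boundaries marked) comes from a fixed set of permitted factors. As an illustrative sketch of the concept, not the paper's actual task format, a strictly 2-local membership test can be written as:

```python
# Illustrative sketch of a strictly 2-local (SL_2) language: membership
# depends only on which ADJACENT symbol pairs occur, with "#" marking
# the word boundaries. Grammar and alphabet below are made up for
# illustration; the benchmark's real tasks are defined in the paper.

BOUNDARY = "#"

def sl2_accepts(word: str, allowed_pairs: set[tuple[str, str]]) -> bool:
    """Accept iff every adjacent pair in #word# is a permitted factor."""
    padded = BOUNDARY + word + BOUNDARY
    return all((x, y) in allowed_pairs for x, y in zip(padded, padded[1:]))

# Example SL_2 grammar over {a, b}: the factor "aa" is forbidden,
# i.e. 'a' may never immediately follow 'a'.
allowed = {("#", "a"), ("#", "b"), ("a", "b"), ("b", "a"),
           ("b", "b"), ("a", "#"), ("b", "#"), ("#", "#")}

print(sl2_accepts("abab", allowed))   # True: no "aa" factor
print(sl2_accepts("abaab", allowed))  # False: contains "aa"
```

The inductive task the benchmark probes is the reverse direction: given only a finite sample of accepted and rejected strings, recover the permitted-factor set.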

📝 Abstract
Large language models (LLMs) have shown remarkable improvements in reasoning, and many existing benchmarks have been addressed, either fully or partially, by models such as o1 and o3. However, the majority of these benchmarks emphasize deductive reasoning, including mathematical and coding tasks in which rules such as mathematical axioms or programming syntax are clearly defined, and based on which LLMs can plan and apply these rules to arrive at a solution. In contrast, inductive reasoning, where one infers the underlying rules from observed data, remains less explored. Such inductive processes lie at the heart of scientific discovery, as they enable researchers to extract general principles from empirical observations. To assess whether LLMs possess this capacity, we introduce InductionBench, a new benchmark designed to evaluate the inductive reasoning ability of LLMs. Our experimental findings reveal that even the most advanced models available struggle to master the simplest complexity classes within the subregular hierarchy of functions, highlighting a notable deficiency in current LLMs' inductive reasoning capabilities. Code and data are available at https://github.com/Wenyueh/inductive_reasoning_benchmark.
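Alongside the strictly local class, the AI summary above mentions strictly piecewise (SP) languages, which constrain symbol order at a distance rather than adjacency: a string is rejected iff it contains a forbidden subsequence. A minimal sketch (the grammar below is invented for illustration, not taken from the benchmark):

```python
# Illustrative sketch of a strictly piecewise (SP_2) language:
# membership depends only on which ordered symbol pairs occur as
# (not necessarily adjacent) subsequences.
from itertools import combinations

def sp2_accepts(word: str, forbidden_pairs: set[tuple[str, str]]) -> bool:
    """Accept iff no forbidden length-2 subsequence occurs in the word."""
    return all((x, y) not in forbidden_pairs
               for x, y in combinations(word, 2))

# Example SP_2 grammar: 'b' may never occur anywhere after an 'a'.
forbidden = {("a", "b")}

print(sp2_accepts("bbaa", forbidden))  # True: every 'b' precedes the a's
print(sp2_accepts("ab", forbidden))    # False: subsequence "ab" occurs
```

Contrasting SL with SP makes the difficulty gradient concrete: both are sub-finite-state, yet inferring even these constraint sets from finite data is where, per the abstract, current models fall short.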
Problem

Research questions and friction points this paper is trying to address.

Most existing benchmarks emphasize deductive reasoning (math, coding), while inductive reasoning remains underexplored
No targeted way to assess whether LLMs can infer underlying rules from finite observed data
Whether current LLMs possess this capacity, which lies at the heart of scientific discovery, is unclear
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces InductionBench, the first benchmark built on subregular complexity classes (SL, SP) to evaluate inductive reasoning in LLMs
Constructs tasks from finite-state automata, giving syntactic rigor, minimal sample size, and controllable difficulty
Shows that even state-of-the-art models (e.g., o1, o3) fail on the simplest subregular classes, exposing a fundamental inductive deficit