AI Summary
This work investigates the hypothesis-search-based reasoning mechanisms employed by large language models (LLMs) in few-shot rule induction tasks, systematically identifying error sources and benchmarking against direct program generation.
Method: We propose an LLM-driven hypothesis generation and verification framework, integrating fine-grained error attribution and human performance baselines for quantitative evaluation.
Contribution/Results: Our framework significantly outperforms direct generation, approaching human-level induction accuracy. Analysis reveals that the primary bottleneck lies in hypothesis generation, not verification, and manifests as semantic drift and insufficient rule generalization. Consequently, we identify key pathways to enhance program induction: improving LLMs' symbolic abstraction capabilities and structured hypothesis construction. This study provides empirical grounding and methodological insights for designing interpretable, human-like rule induction systems.
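The generation-then-verification loop summarized above can be sketched minimally as follows. This is an illustrative sketch, not the paper's implementation: `propose_hypotheses` is a hypothetical stub standing in for LLM sampling of candidate rules, and the verifier simply checks each candidate against the few-shot examples.

```python
# Minimal sketch of hypothesis search for few-shot rule induction.
# In the actual framework, candidates would be sampled from an LLM;
# here they are hard-coded stand-ins (an assumption for illustration).

def propose_hypotheses():
    """Hypothetical generator: returns candidate rules as callables."""
    return [
        lambda x: x + 1,   # a wrong candidate rule
        lambda x: x * 2,   # the rule consistent with the examples below
        lambda x: x ** 2,  # another wrong candidate rule
    ]

def verify(hypothesis, examples):
    """Accept a hypothesis only if it reproduces every training pair."""
    return all(hypothesis(x) == y for x, y in examples)

def induce_rule(examples):
    """Return the first candidate that passes verification, else None."""
    for h in propose_hypotheses():
        if verify(h, examples):
            return h
    return None

examples = [(1, 2), (3, 6), (5, 10)]  # few-shot pairs consistent with x * 2
rule = induce_rule(examples)
print(rule(7))  # apply the induced rule to a novel input -> 14
```

The key design point this illustrates is the division of labor the paper analyzes: verification is a mechanical filter, so overall accuracy hinges on whether the generator proposes a semantically correct rule at all.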
Abstract
Inductive reasoning enables humans to infer abstract rules from limited examples and apply them to novel situations. In this work, we compare an LLM-based hypothesis search framework with direct program generation approaches on few-shot rule induction tasks. Our findings show that hypothesis search achieves performance comparable to humans, while direct program generation falls notably behind. An error analysis reveals key bottlenecks in hypothesis generation and suggests directions for advancing program induction methods. Overall, this paper underscores the potential of LLM-based hypothesis search for modeling inductive reasoning and the challenges in building more efficient systems.