PBEBench: A Multi-Step Programming by Examples Reasoning Benchmark inspired by Historical Linguistics

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the generalization capabilities of large language models (LLMs) on multi-step inductive reasoning tasks inspired by historical linguistics. To address the lack of genuine abstraction and controllable difficulty in existing benchmarks, we introduce the first historical linguistics–driven programming-by-example (PBE) evaluation benchmark. Our method employs an automated, dynamic generation pipeline that integrates phonological sound-change rules with formal grammar constraints, enabling scalable, contamination-resistant construction of test instances with tunable difficulty; we generate nearly 1,000 high-difficulty examples. Experimental results reveal that the state-of-the-art model Claude-3.7-Sonnet achieves only a 54% success rate, exposing fundamental limitations of LLMs in structured, diachronic inductive reasoning. Our core contributions are: (i) the first PBE paradigm guided by historical linguistics principles, and (ii) a scalable, interpretable generation framework supporting rigorous evaluation of evolutionary abstraction.
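To make the task format concrete, here is a minimal sketch of what a sound-change-based PBE instance might look like. The rules below are toy illustrations, not the benchmark's actual rule inventory or pipeline: a hidden cascade of ordered rewrite rules maps "parent" word forms to "daughter" forms, and the solver must induce the cascade from input/output pairs.

```python
import re

# Hypothetical ordered sound-change rules (regex pattern -> replacement).
# The real benchmark derives rules from historical linguistics; these toy
# rules only illustrate the instance format.
RULES = [
    (r"k(?=[ei])", "tʃ"),  # palatalization of /k/ before front vowels
    (r"p$", "b"),          # word-final voicing (toy rule)
    (r"aa", "a"),          # long-vowel shortening
]

def apply_rules(word: str) -> str:
    """Apply each rewrite rule in order to derive the daughter form."""
    for pattern, replacement in RULES:
        word = re.sub(pattern, replacement, word)
    return word

# A PBE instance: the solver sees these input/output pairs, must induce
# the hidden rule cascade, and is scored on held-out inputs.
examples = ["kipaa", "keto", "lap"]
pairs = [(w, apply_rules(w)) for w in examples]
# e.g. "kipaa" -> "tʃipa", "keto" -> "tʃeto", "lap" -> "lab"
```

Because rule cascades compose and can feed or bleed one another, difficulty is naturally tunable by the number and interaction of rules, which is what makes the generation pipeline scalable.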

📝 Abstract
Recently, long chain of thought (LCoT) Large Language Models (LLMs) have taken the machine learning world by storm with their breathtaking reasoning capabilities. However, are the abstract reasoning abilities of these models general enough for problems of practical importance? Unlike past work, which has focused mainly on math, coding, and data wrangling, we focus on a historical linguistics-inspired inductive reasoning problem, formulated as Programming by Examples. We develop a fully automated pipeline for dynamically generating a benchmark for this task with controllable difficulty in order to tackle scalability and contamination issues to which many reasoning benchmarks are subject. Using our pipeline, we generate a test set with nearly 1k instances that is challenging for all state-of-the-art reasoning LLMs, with the best model (Claude-3.7-Sonnet) achieving a mere 54% pass rate, demonstrating that LCoT LLMs still struggle with a class of reasoning that is ubiquitous in historical linguistics as well as many other domains.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' reasoning on historical linguistics-inspired problems
Generating scalable benchmarks for Programming by Examples tasks
Assessing LCoT LLMs' limitations in multi-step inductive reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated pipeline for dynamic benchmark generation
Controllable difficulty in benchmark instances
Historical linguistics-inspired inductive reasoning problem
Atharva Naik
PhD Student, Carnegie Mellon University
LLM4Code, LLM Reasoning, Alignment

Darsh Agrawal
Carnegie Mellon University

M. Kapadnis
Carnegie Mellon University

Yuwei An
Carnegie Mellon University

Yash Mathur

Carolyn Rose
Carnegie Mellon University

David Mortensen
Carnegie Mellon University
NLP, linguistics, phonology, morphology