PBE Meets LLM: When Few Examples Aren't Few-Shot Enough

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the capabilities and limitations of large language models (LLMs) in programming by example (PBE) for tabular data transformation. To address the poor generalization and rigid input constraints of conventional PBE systems, we propose a hybrid solving framework: it first invokes an exact but narrowly scoped traditional PBE solver and, on failure, falls back to an LLM augmented with domain-informed structured prompting, including multi-attempt generation and semantic constraint injection. We evaluate diverse state-of-the-art LLMs and prompting strategies across benchmark tasks. Results show that LLMs support more flexible input formats and achieve higher overall accuracy than traditional solvers, yet remain challenged by semantically ambiguous examples. The hybrid approach significantly improves task success rates while preserving accuracy and robustness, establishing a scalable, practical extension to the PBE paradigm that bridges symbolic precision with neural flexibility.

📝 Abstract
Large language models (LLMs) can generate code from natural language descriptions. Their performance is typically evaluated on programming benchmarks that simulate real-world tasks: the specification is a docstring, function signature, or bug report, the model generates a program, and the program is checked against predefined test cases. Programming by Example (PBE) instead uses input-output examples as the specification. Traditional PBE systems rely on search over restricted transformation spaces and are usually designed for narrow domains and fixed input formats, so it remains unclear how well LLMs perform on PBE tasks. In this work, we evaluate LLMs on PBE tasks involving tabular data transformations. We prompt models to generate functions that convert an input table to an output table, then test the generated functions on unseen inputs to measure accuracy. Our study covers multiple LLMs and prompting strategies, such as one-shot vs. multi-try generation, and compares performance with and without PBE-specific knowledge. Finally, we propose a hybrid method that calls a traditional PBE solver first and falls back to an LLM when the solver fails. Our results show that LLMs support more diverse input formats and achieve higher accuracy than conventional methods, but struggle with tasks that contain ambiguity. The hybrid approach improves overall success by combining the strengths of both paradigms.
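The solver-first, LLM-fallback pipeline described in the abstract might be sketched as follows. This is a minimal illustration, not the paper's implementation: the names `hybrid_solve`, `pbe_solver`, and `llm_generate`, and the rule of accepting the first LLM candidate that reproduces all given examples, are assumptions.

```python
from typing import Callable, Optional

# A table is a list of rows; each row is a list of cell strings.
Table = list[list[str]]
Example = tuple[Table, Table]  # (input table, expected output table)


def hybrid_solve(
    examples: list[Example],
    pbe_solver: Callable[[list[Example]], Optional[Callable[[Table], Table]]],
    llm_generate: Callable[[list[Example], int], Optional[Callable[[Table], Table]]],
    max_attempts: int = 3,
) -> Optional[Callable[[Table], Table]]:
    """Try the exact PBE solver first; on failure, fall back to an LLM,
    sampling up to max_attempts candidate programs and keeping the first
    one that reproduces every given input-output example."""
    program = pbe_solver(examples)
    if program is not None:
        return program
    for attempt in range(max_attempts):
        candidate = llm_generate(examples, attempt)
        if candidate is None:
            continue
        # Multi-try filtering: only keep candidates consistent with the examples.
        if all(candidate(inp) == out for inp, out in examples):
            return candidate
    return None
```

Filtering LLM candidates against the given examples is what lets the fallback preserve the precision of the symbolic solver while extending its coverage.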
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs on Programming by Example tasks
Comparing LLM performance with traditional PBE methods
Proposing hybrid approach combining LLMs and PBE solvers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates LLMs on tabular data PBE tasks
Proposes hybrid PBE-LLM method to improve success rates
Tests diverse prompting strategies on LLMs
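The evaluation protocol mentioned above, testing generated functions on unseen inputs, might look like the following sketch. The function name `heldout_accuracy` and the exact-match scoring rule are assumptions for illustration, not the paper's code.

```python
from typing import Callable

Table = list[list[str]]


def heldout_accuracy(
    program: Callable[[Table], Table],
    heldout_examples: list[tuple[Table, Table]],
) -> float:
    """Fraction of held-out examples on which the synthesized program
    produces exactly the expected output table. A raised exception
    counts as a failure on that example, so crashing programs are
    penalized rather than aborting the evaluation."""
    correct = 0
    for inp, expected in heldout_examples:
        try:
            if program(inp) == expected:
                correct += 1
        except Exception:
            pass
    return correct / len(heldout_examples)
```

Scoring on held-out inputs, rather than the examples used as the specification, is what distinguishes genuine generalization from memorizing the prompt.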