GP and LLMs for Program Synthesis: No Clear Winners

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the complementary capabilities of genetic programming (PushGP) and large language models (GPT-4o) in program synthesis, addressing limitations of homogeneous approaches. Method: Controlled experiments on the PSB2 benchmark systematically vary the prompt strategy for GPT-4o—pure input-output examples, natural-language descriptions, and their fusion—while also varying the number of training examples to assess robustness. Contribution/Results: (1) Combining PushGP with GPT-4o under hybrid (data-text) prompting jointly solves 23 of 25 tasks, outperforming either method in isolation. (2) On 12 tasks, the methods exhibit strict complementarity—only one succeeds—revealing fundamental differences in search mechanisms: semantics-driven evolutionary search (PushGP) versus pattern-based statistical induction (GPT-4o). (3) Both PushGP and GPT-4o with data-only prompts degrade markedly as the number of examples shrinks, whereas hybrid prompting remains robust. These findings empirically demonstrate strong complementarity between evolutionary GP and LLMs in program synthesis, establishing a principled foundation and design paradigm for heterogeneous method integration.

📝 Abstract
Genetic programming (GP) and large language models (LLMs) differ in how program specifications are provided: GP uses input-output examples, and LLMs use text descriptions. In this work, we compared the ability of PushGP and GPT-4o to synthesize computer programs for tasks from the PSB2 benchmark suite. We used three prompt variants with GPT-4o: input-output examples (data-only), a textual description of the task (text-only), and a combination of both (data-text). Additionally, we varied the number of input-output examples available for building programs. For each synthesizer and task combination, we compared success rates across all program synthesizers, as well as the similarity between successful GPT-4o-synthesized programs. We found that the combination of PushGP and GPT-4o with data-text prompting led to the greatest number of tasks solved (23 of the 25 tasks), even though several tasks were solved exclusively by only one of the two synthesizers. We also observed that PushGP and GPT-4o with data-only prompting solved fewer tasks as the training set size decreased, while the remaining synthesizers saw no decrease. We also detected significant differences in similarity between the successful programs synthesized by GPT-4o with text-only and data-only prompting. With no single program synthesizer dominating, this work highlights the importance of the different optimization techniques used by PushGP and LLMs to synthesize programs.
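The three prompt variants described in the abstract can be sketched as a single prompt builder. This is a minimal illustration of the data-only / text-only / data-text split; the function name, wording, and formatting are assumptions, not the paper's actual prompts.

```python
def build_prompt(variant, description, examples):
    """Assemble an LLM prompt in one of three styles: 'data-only',
    'text-only', or 'data-text'. Wording here is illustrative only.

    examples: list of (input, output) pairs.
    """
    io_lines = "\n".join(f"input: {x!r} -> output: {y!r}" for x, y in examples)
    if variant == "data-only":
        # Specification by input-output examples alone (GP-style specification).
        return f"Write a function consistent with these examples:\n{io_lines}"
    if variant == "text-only":
        # Specification by natural-language description alone.
        return f"Write a function for this task:\n{description}"
    if variant == "data-text":
        # Hybrid: description plus examples, the variant that performed best.
        return (f"Write a function for this task:\n{description}\n"
                f"Examples:\n{io_lines}")
    raise ValueError(f"unknown prompt variant: {variant}")
```

Varying the length of `examples` corresponds to the paper's manipulation of training set size.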
Problem

Research questions and friction points this paper is trying to address.

Compare GP and LLMs for program synthesis effectiveness
Evaluate impact of prompt types on GPT-4o performance
Analyze program similarity across different synthesis approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combined PushGP and GPT-4o for program synthesis
Used data-text prompting for optimal results
Compared success rates across different prompt variants
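The per-synthesizer, per-task success-rate comparison above can be sketched as a small aggregation over run records. The record format and helper names are assumptions for illustration; the paper's actual experimental harness is not specified here.

```python
from collections import defaultdict

def success_rates(runs):
    """Aggregate per-(synthesizer, task) success rates.

    runs: iterable of (synthesizer, task, solved) tuples,
    one per independent synthesis attempt (illustrative format).
    """
    totals = defaultdict(int)
    wins = defaultdict(int)
    for synth, task, solved in runs:
        totals[(synth, task)] += 1
        wins[(synth, task)] += bool(solved)
    return {key: wins[key] / totals[key] for key in totals}

def tasks_solved(rates):
    """Tasks each synthesizer solved at least once (rate > 0)."""
    out = defaultdict(set)
    for (synth, task), rate in rates.items():
        if rate > 0:
            out[synth].add(task)
    return dict(out)
```

Taking the union of the solved-task sets for PushGP and GPT-4o (data-text) is how a combined count such as "23 of 25 tasks" would be obtained.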
Jose Guadalupe Hernandez
Cedars-Sinai Medical Center, Los Angeles, CA, USA
Anil Kumar Saini
Cedars-Sinai Medical Center, Los Angeles, CA, USA
Gabriel Ketron
Cedars-Sinai Medical Center, Los Angeles, CA, USA
Jason H. Moore
Chair, Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA
Artificial Intelligence · Machine Learning · Biomedical Informatics · Precision Medicine · Translational Bioinformatics