GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning

📅 2025-07-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the low sample efficiency of reinforcement learning (RL) methods in the downstream adaptation of large language models (LLMs), this paper proposes GEPA, a reflective prompt-optimization framework based on natural language. GEPA diagnoses problems by analyzing system-level reasoning traces, proposes improved prompts through natural-language reflection, and combines genetic search with Pareto-frontier selection to enable sample-efficient prompt optimization; it also couples linguistic reflection with genetic-Pareto search and shows promise as an inference-time search strategy. Experiments demonstrate that GEPA outperforms GRPO by 10% on average across four tasks (up to +20%) while using up to 35× fewer rollouts, and it beats the leading prompt optimizer MIPROv2 by over 10% across two LLMs.

📝 Abstract
Large language models (LLMs) are increasingly adapted to downstream tasks via reinforcement learning (RL) methods like Group Relative Policy Optimization (GRPO), which often require thousands of rollouts to learn new tasks. We argue that the interpretable nature of language can often provide a much richer learning medium for LLMs, compared with policy gradients derived from sparse, scalar rewards. To test this, we introduce GEPA (Genetic-Pareto), a prompt optimizer that thoroughly incorporates natural language reflection to learn high-level rules from trial and error. Given any AI system containing one or more LLM prompts, GEPA samples system-level trajectories (e.g., reasoning, tool calls, and tool outputs) and reflects on them in natural language to diagnose problems, propose and test prompt updates, and combine complementary lessons from the Pareto frontier of its own attempts. As a result of GEPA's design, it can often turn even just a few rollouts into a large quality gain. Across four tasks, GEPA outperforms GRPO by 10% on average and by up to 20%, while using up to 35x fewer rollouts. GEPA also outperforms the leading prompt optimizer, MIPROv2, by over 10% across two LLMs, and demonstrates promising results as an inference-time search strategy for code optimization.
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM prompts using natural language reflection
Reducing rollout requirements compared to reinforcement learning
Improving performance over existing prompt optimizers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses natural language reflection for prompt optimization
Combines lessons from Pareto frontier attempts
Achieves large quality gains from only a few rollouts
Lakshya A Agrawal
University of California, Berkeley
Large Language Models · AI4Code · Artificial Intelligence · Programming Languages · Software Engineering
Shangyin Tan
PhD Student, UC Berkeley
Program Analysis · Programming Languages · Compilers
Dilara Soylu
Department of Computer Science, Stanford University
Natural Language Processing · Machine Learning
Noah Ziems
Visiting Researcher @ MIT CSAIL, PhD Student @ Notre Dame
Machine Learning · Natural Language Processing
Rishi Khare
Student of Computer Science, University of California, Berkeley
AI Systems · Large Language Models · Natural Language Processing · Reinforcement Learning
Krista Opsahl-Ong
Stanford University
Machine Learning · Artificial Intelligence
Arnav Singhvi
Stanford University
Herumb Shandilya
Stanford University
Michael J Ryan
Stanford University
Meng Jiang
Notre Dame
Christopher Potts
Professor of Linguistics and, by courtesy, of Computer Science
Linguistics · Computational Linguistics · Semantics · Pragmatics · Computational Pragmatics
Koushik Sen
Professor of Computer Science, University of California, Berkeley
Computer Science · Testing · Debugging · Program Analysis · Concurrency
Alexandros G. Dimakis
BespokeLabs.ai
Ion Stoica
Professor of Computer Science, UC Berkeley
Cloud Computing · Networking · Distributed Systems · Big Data
Dan Klein
UC Berkeley
Matei Zaharia
UC Berkeley and Databricks
Distributed Systems · Machine Learning · Databases · Security
Omar Khattab
MIT EECS & CSAIL
Natural Language Processing · Information Retrieval · ML Systems · AI Software