Synthesizing Programmatic Reinforcement Learning Policies with Large Language Model Guided Search

📅 2024-05-26
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Programmatic reinforcement learning (PRL) offers interpretability and generalization but suffers from severe sample inefficiency, requiring tens of millions of program-environment interactions. To address this, the authors propose LLM-Guided Search (LLM-GS), a framework that leverages the programming expertise and common-sense reasoning of LLMs to guide the search for executable policy programs. The method introduces: (1) a Pythonic-DSL strategy, in which the LLM first writes Python code and then converts it into the domain-specific language (DSL), sidestepping LLMs' difficulty producing grammatically correct DSL programs; (2) Scheduled Hill Climbing, a search algorithm that consistently improves LLM-generated programs while exploring the programmatic search space efficiently; and (3) natural-language task specification, letting users without programming or DSL knowledge obtain performant programs. Experiments in the Karel domain demonstrate superior effectiveness and efficiency over prior state-of-the-art PRL methods, ablation studies verify the critical role of both components, and experiments on two novel tasks show the framework translates natural-language descriptions into performant policies.

📝 Abstract
Programmatic reinforcement learning (PRL) has been explored for representing policies through programs as a means to achieve interpretability and generalization. Despite promising outcomes, current state-of-the-art PRL methods are hindered by sample inefficiency, necessitating tens of millions of program-environment interactions. To tackle this challenge, we introduce a novel LLM-guided search framework (LLM-GS). Our key insight is to leverage the programming expertise and common sense reasoning of LLMs to enhance the efficiency of assumption-free, random-guessing search methods. We address the challenge of LLMs' inability to generate precise and grammatically correct programs in domain-specific languages (DSLs) by proposing a Pythonic-DSL strategy - an LLM is instructed to initially generate Python codes and then convert them into DSL programs. To further optimize the LLM-generated programs, we develop a search algorithm named Scheduled Hill Climbing, designed to efficiently explore the programmatic search space to improve the programs consistently. Experimental results in the Karel domain demonstrate our LLM-GS framework's superior effectiveness and efficiency. Extensive ablation studies further verify the critical role of our Pythonic-DSL strategy and Scheduled Hill Climbing algorithm. Moreover, we conduct experiments with two novel tasks, showing that LLM-GS enables users without programming skills and knowledge of the domain or DSL to describe the tasks in natural language to obtain performant programs.
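The Pythonic-DSL strategy described in the abstract — have the LLM emit Python first, then mechanically translate it into the DSL — can be illustrated with a toy translator. The token syntax below loosely follows the Karel-style DSL used in prior PRL work, but the function names, the `ACTIONS`/`PERCEPTIONS` tables, and the exact grammar are illustrative assumptions, not the paper's implementation:

```python
import ast

# Hypothetical mappings from Pythonic names to DSL tokens
# (illustrative only; not the paper's exact grammar).
ACTIONS = {"move": "move", "turn_left": "turnLeft", "turn_right": "turnRight",
           "pick_marker": "pickMarker", "put_marker": "putMarker"}
PERCEPTIONS = {"front_is_clear": "frontIsClear", "markers_present": "markersPresent"}

def convert(node):
    """Translate a restricted Python AST into a flat DSL program string."""
    if isinstance(node, ast.Module):
        body = " ".join(convert(stmt) for stmt in node.body)
        return f"DEF run m( {body} m)"
    if isinstance(node, ast.Expr) and isinstance(node.value, ast.Call):
        return ACTIONS[node.value.func.id]          # action call, e.g. move()
    if isinstance(node, ast.While):
        cond = PERCEPTIONS[node.test.func.id]       # perception call as condition
        body = " ".join(convert(stmt) for stmt in node.body)
        return f"WHILE c( {cond} c) w( {body} w)"
    if isinstance(node, ast.If):
        cond = PERCEPTIONS[node.test.func.id]
        body = " ".join(convert(stmt) for stmt in node.body)
        return f"IF c( {cond} c) i( {body} i)"
    raise ValueError(f"unsupported construct: {ast.dump(node)}")

# Python-side policy the LLM might produce for a marker-placing task.
python_policy = """
while front_is_clear():
    move()
put_marker()
"""
dsl_program = convert(ast.parse(python_policy))
```

The point of the two-stage design is that the LLM only needs fluency in Python, which it has seen abundantly during pretraining; the deterministic conversion step then guarantees a syntactically valid DSL program.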
Problem

Research questions and friction points this paper is trying to address.

Improves sample efficiency in programmatic reinforcement learning.
Enhances program generation using LLM-guided search and Pythonic-DSL.
Enables non-programmers to create performant programs via natural language.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-guided search enhances programmatic reinforcement learning efficiency.
Pythonic-DSL strategy converts Python programs into the domain-specific language.
Scheduled Hill Climbing optimizes LLM-generated programs effectively.
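The third contribution, Scheduled Hill Climbing, refines LLM-generated programs through local search. The sketch below is a generic hill climber with a per-round candidate budget, standing in for the paper's algorithm on a toy search space; the schedule, mutation operators, and evaluation function here are assumptions for illustration, not the paper's design:

```python
import random

def hill_climb(program, neighbors, evaluate, budget_schedule):
    """Greedy local search with a per-round candidate budget.

    A minimal, generic sketch: the paper's Scheduled Hill Climbing
    schedules how search effort is allocated across rounds, but its
    exact schedule and mutation operators are not reproduced here."""
    best, best_score = program, evaluate(program)
    for budget in budget_schedule:
        # Sample `budget` mutated candidates; keep any strict improvement.
        for _ in range(budget):
            candidate = random.choice(neighbors(best))
            score = evaluate(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score

# Toy stand-in for a program space: bit strings scored by their sum,
# with neighbors obtained by flipping one bit.
random.seed(0)
flip_one = lambda p: [p[:i] + [1 - p[i]] + p[i + 1:] for i in range(len(p))]
best, score = hill_climb([0] * 6, flip_one, sum, budget_schedule=[6, 6, 6])
```

In LLM-GS the analogue of `program` is a DSL policy, `neighbors` a program mutation operator, and `evaluate` the episodic return from executing the policy in the environment — which is exactly why efficient search matters: every evaluation costs environment interactions.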
Max Liu
National Taiwan University
Chan-Hung Yu
National Taiwan University
Wei-Hsu Lee
National Taiwan University
Cheng-Wei Hung
National Taiwan University
Yen-Chun Chen
Researcher, Microsoft
Natural Language Processing · Computer Vision · Multimodal AI
Shao-Hua Sun
Assistant Professor at National Taiwan University
Machine Learning · Robot Learning · Reinforcement Learning · Program Synthesis