🤖 AI Summary
Programmatic reinforcement learning (PRL) offers strong interpretability and generalization but suffers from severe sample inefficiency, often requiring tens of millions of environment interactions. To address this, we propose LLM-Guided Search (LLM-GS), a framework that leverages LLMs to generate executable policy programs directly from natural language descriptions. Our method introduces: (1) a Pythonic domain-specific language (DSL) strategy, in which the LLM first writes Python and then converts it into the DSL, eliminating LLM-induced syntax errors; (2) Scheduled Hill Climbing, a search algorithm that efficiently explores the programmatic search space to consistently improve program quality; and (3) natural-language task specification, making the framework usable by non-programmers with no knowledge of the domain or DSL. On the Karel benchmark, LLM-GS reduces sample complexity by orders of magnitude over prior state-of-the-art methods. Ablation studies confirm the contribution of each component. Moreover, LLM-GS generalizes to two unseen tasks, achieving high end-to-end success rates in translating natural language into executable policies.
📝 Abstract
Programmatic reinforcement learning (PRL) has been explored for representing policies through programs as a means to achieve interpretability and generalization. Despite promising outcomes, current state-of-the-art PRL methods are hindered by sample inefficiency, necessitating tens of millions of program-environment interactions. To tackle this challenge, we introduce a novel LLM-guided search framework (LLM-GS). Our key insight is to leverage the programming expertise and common-sense reasoning of LLMs to enhance the efficiency of assumption-free, random-guessing search methods. We address LLMs' inability to generate precise and grammatically correct programs in domain-specific languages (DSLs) by proposing a Pythonic-DSL strategy: an LLM is instructed to first generate Python code and then convert it into DSL programs. To further optimize the LLM-generated programs, we develop a search algorithm named Scheduled Hill Climbing, designed to efficiently explore the programmatic search space and consistently improve the programs. Experimental results in the Karel domain demonstrate the superior effectiveness and efficiency of our LLM-GS framework. Extensive ablation studies further verify the critical roles of our Pythonic-DSL strategy and Scheduled Hill Climbing algorithm. Moreover, experiments on two novel tasks show that LLM-GS enables users without programming skills or knowledge of the domain or DSL to obtain performant programs by describing tasks in natural language.
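To make the Pythonic-DSL strategy concrete, here is a minimal, hypothetical sketch of the conversion step: the LLM emits a policy in restricted Python, which is then mechanically translated into Karel-style DSL tokens. The keyword mapping, helper function, and output format below are illustrative assumptions, not the paper's actual implementation (which would also need to emit the DSL's block delimiters).

```python
# Hypothetical sketch: translate a restricted-Python policy into a
# Karel-style DSL program. The mapping below is a simplified assumption.
KEYWORD_MAP = {
    "while": "WHILE",
    "if": "IF",
    "else": "ELSE",
    "front_is_clear()": "frontIsClear",
    "move()": "move",
    "turn_left()": "turnLeft",
    "put_marker()": "putMarker",
}

def python_to_dsl(lines):
    """Naively map each restricted-Python line to DSL tokens (sketch only:
    a real translator would parse the code and emit nested DSL blocks)."""
    out = ["DEF run m("]
    for line in lines:
        stripped = line.strip().rstrip(":")  # drop indentation and colons
        for py, dsl in KEYWORD_MAP.items():
            stripped = stripped.replace(py, dsl)
        out.append(stripped)
    out.append("m)")
    return " ".join(out)

# Example LLM-written restricted-Python policy.
policy = [
    "while front_is_clear():",
    "    move()",
    "put_marker()",
]
print(python_to_dsl(policy))
```

The point of the two-stage strategy is that LLMs are far more reliable at writing Python than an unfamiliar DSL, so syntax errors are confined to a mechanical translation step rather than the generation step.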
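The Scheduled Hill Climbing idea can likewise be sketched generically: ordinary hill climbing mutates the current best program, but the number of neighbors evaluated per step follows a schedule. The neighbor generator, fitness function, and doubling schedule below are illustrative assumptions for a toy integer search, not the paper's exact procedure over programs.

```python
import random

def scheduled_hill_climbing(init, neighbors, fitness, steps=50,
                            schedule=lambda t: min(2 ** t, 32)):
    """Hill climbing whose per-step neighbor budget follows a schedule
    (sketch; the schedule and operators here are assumptions)."""
    best, best_f = init, fitness(init)
    for t in range(steps):
        k = schedule(t)  # number of neighbors sampled at step t
        candidates = [neighbors(best) for _ in range(k)]
        cand = max(candidates, key=fitness)
        cand_f = fitness(cand)
        if cand_f > best_f:  # greedily keep strict improvements
            best, best_f = cand, cand_f
    return best, best_f

# Toy usage: maximize -(x - 3)^2 over integers via +/-1 mutations.
random.seed(0)
best, score = scheduled_hill_climbing(
    init=0,
    neighbors=lambda x: x + random.choice([-1, 1]),
    fitness=lambda x: -(x - 3) ** 2,
)
print(best, score)
```

In LLM-GS, the analogue of `init` is an LLM-generated program and `fitness` is the program's return in the environment; the schedule trades off cheap early exploration against more thorough later refinement.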