ImagineBench: Evaluating Reinforcement Learning with Large Language Model Rollouts

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning (RL) suffers from high sample complexity due to its reliance on extensive real-world interactions, hindering practical deployment. While recent work leverages large language models (LLMs) to generate synthetic “imagined rollouts” for offline RL, no standardized benchmark exists to evaluate such LLM-synthesized trajectories. This paper introduces the first offline RL benchmark dedicated to LLM-generated virtual rollouts. We propose a unified evaluation framework that integrates both real and LLM-synthesized trajectories, supporting natural-language-instructed policy learning across diverse simulation domains—including locomotion, manipulation, and navigation. The benchmark features multimodal task instructions, cross-domain environments, and a difficulty-stratified evaluation protocol. Empirical results reveal a substantial performance gap: state-of-the-art offline RL algorithms achieve only 35.44% success rate on hard tasks using LLM rollouts, versus 64.37% with real data—highlighting a critical bottleneck in algorithmic adaptation to synthetic data.

📝 Abstract
A central challenge in reinforcement learning (RL) is its dependence on extensive real-world interaction data to learn task-specific policies. While recent work demonstrates that large language models (LLMs) can mitigate this limitation by generating synthetic experience (referred to as imaginary rollouts) for mastering novel tasks, progress in this emerging field is hindered by the lack of a standard benchmark. To bridge this gap, we introduce ImagineBench, the first comprehensive benchmark for evaluating offline RL algorithms that leverage both real rollouts and LLM-imaginary rollouts. The key features of ImagineBench include: (1) datasets comprising environment-collected and LLM-imaginary rollouts; (2) diverse environment domains covering locomotion, robotic manipulation, and navigation tasks; and (3) natural-language task instructions with varying complexity levels to facilitate language-conditioned policy learning. Through systematic evaluation of state-of-the-art offline RL algorithms, we observe that naively applying existing offline RL algorithms leads to suboptimal performance on unseen tasks: a 35.44% success rate on hard tasks when training on LLM-imaginary rollouts, versus 64.37% for the same method trained on real rollouts. This result highlights the need for algorithmic advances to better leverage LLM-imaginary rollouts. Additionally, we identify key opportunities for future research, including better utilization of imaginary rollouts, fast online adaptation and continual learning, and extension to multi-modal tasks. Our code is publicly available at https://github.com/LAMDA-RL/ImagineBench.
Problem

Research questions and friction points this paper is trying to address.

Evaluating RL algorithms using LLM-generated synthetic experience data
Lack of standard benchmark for RL with imaginary rollouts
Improving offline RL performance on unseen tasks via LLM rollouts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces ImagineBench for offline RL evaluation
Combines real and LLM-generated synthetic rollouts
Supports diverse domains and language instructions
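The core idea above is to pool environment-collected rollouts with LLM-imagined ones into a single language-conditioned offline dataset. The sketch below illustrates that mixing step in plain Python; it is a generic illustration only, and the function name `mix_rollouts`, the rollout dict fields, and the `imagined_ratio` parameter are assumptions, not ImagineBench's actual API.

```python
import random

def mix_rollouts(real, imagined, imagined_ratio=0.5, seed=0):
    """Combine real and LLM-imagined rollouts into one offline dataset.

    Each rollout is a dict with an 'instruction' (natural-language task
    description), a 'source' tag, and a list of (state, action, reward)
    transitions. All names here are illustrative, not ImagineBench's API.
    """
    rng = random.Random(seed)
    # Choose how many imagined rollouts to add so they make up
    # roughly `imagined_ratio` of the mixed dataset.
    n_imagined = int(len(real) * imagined_ratio / (1.0 - imagined_ratio))
    sampled = rng.sample(imagined, min(n_imagined, len(imagined)))
    dataset = real + sampled
    rng.shuffle(dataset)  # interleave sources for offline training
    return dataset

# Toy rollouts: real ones from the environment, imagined ones from an LLM.
real = [{"instruction": "walk forward", "source": "env",
         "transitions": [((0.0,), 1, 0.5)]} for _ in range(4)]
imagined = [{"instruction": "walk backward", "source": "llm",
             "transitions": [((0.0,), 0, 0.3)]} for _ in range(4)]

mixed = mix_rollouts(real, imagined, imagined_ratio=0.5)
print(len(mixed), sum(r["source"] == "llm" for r in mixed))  # prints: 8 4
```

An offline RL learner would then consume `mixed` as a single replay dataset, conditioning the policy on each rollout's instruction string.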
👥 Authors

Jing-Cheng Pang
Researcher, Huawei; Nanjing University
reinforcement learning, language-conditioned RL, large language models

Kaiyuan Li
Beijing University of Posts and Telecommunications
sequential recommendation, large recommendation models, computational advertising

Yidi Wang
National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China; Polixir.ai

Si-Hang Yang
National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China; Polixir.ai

Shengyi Jiang
The University of Hong Kong

Yang Yu
National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China; Polixir.ai