ASPERA: A Simulated Environment to Evaluate Planning for Complex Action Execution

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two key challenges in evaluating large language models (LLMs) for complex embodied action planning: the difficulty of assessing planning capability and the scarcity of high-quality, multi-step task data. To this end, we propose ASPERA—a framework that synthesizes structured tasks with multi-step goals, dynamic environmental states, and automated verification procedures, enabled by a programmable library of simulated assistants and a human-in-the-loop data generation engine. Leveraging ASPERA, we introduce Asper-Bench, a benchmark comprising 250 high-difficulty tasks, enabling the first systematic evaluation of LLMs’ ability to transfer pre-trained programming knowledge to embodied action planning. Experimental results demonstrate that generating programs that interface with the assistant library is substantially more challenging than generic code generation, revealing critical bottlenecks in LLMs’ symbol-action alignment and environment-interaction reasoning capabilities.

📝 Abstract
This work evaluates the potential of large language models (LLMs) to power digital assistants capable of complex action execution. These assistants rely on pre-trained programming knowledge to execute multi-step goals by composing objects and functions defined in assistant libraries into action execution programs. To achieve this, we develop ASPERA, a framework comprising an assistant library simulation and a human-assisted LLM data generation engine. Our engine allows developers to guide LLM generation of high-quality tasks consisting of complex user queries, simulation state and corresponding validation programs, tackling data availability and evaluation robustness challenges. Alongside the framework we release Asper-Bench, an evaluation dataset of 250 challenging tasks generated using ASPERA, which we use to show that program generation grounded in custom assistant libraries is a significant challenge to LLMs compared to dependency-free code generation.
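To make the task format concrete: each ASPERA task pairs a user query and simulation state with a validation program, and the model must compose library objects and functions into an action execution program. The sketch below is purely illustrative; all names (`Calendar`, `Event`, `find_contact`, the query, etc.) are hypothetical stand-ins and not ASPERA's actual API.

```python
# Hypothetical sketch of the task structure described above. The
# "assistant library", the action execution program, and the validation
# program are all simplified stand-ins, not ASPERA's real interfaces.
from dataclasses import dataclass, field
from typing import List

# --- Minimal stand-in for a simulated assistant library ---
@dataclass
class Event:
    title: str
    attendees: List[str]

@dataclass
class Calendar:
    events: List[Event] = field(default_factory=list)

    def add_event(self, title: str, attendees: List[str]) -> Event:
        event = Event(title, attendees)
        self.events.append(event)
        return event

def find_contact(contacts: List[str], name: str) -> str:
    # Resolve a partial name against the simulated contact list.
    matches = [c for c in contacts if name.lower() in c.lower()]
    if len(matches) != 1:
        raise ValueError(f"ambiguous or unknown contact: {name}")
    return matches[0]

# --- Action execution program the LLM would be asked to generate
#     for a query like "Schedule a review meeting with John" ---
def execute(calendar: Calendar, contacts: List[str]) -> None:
    attendee = find_contact(contacts, "John")
    calendar.add_event("Review meeting", [attendee])

# --- Validation program: inspects the resulting simulation state ---
def validate(calendar: Calendar) -> bool:
    return any(
        e.title == "Review meeting" and "John Doe" in e.attendees
        for e in calendar.events
    )

calendar = Calendar()
execute(calendar, contacts=["John Doe", "Jane Roe"])
assert validate(calendar)
```

The point of this structure is that correctness is checked automatically against the simulation state rather than by string-matching the generated code, which is what makes large-scale evaluation of multi-step plans tractable.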
Problem

Research questions and friction points this paper is trying to address.

How to robustly evaluate LLM planning capability for complex, multi-step action execution in digital assistants
Scarcity of high-quality multi-step task data with verifiable goals
Whether pre-trained programming knowledge transfers to program generation grounded in custom assistant libraries
Innovation

Methods, ideas, or system contributions that make the work stand out.

ASPERA: a simulated assistant library paired with a human-assisted LLM data generation engine
Synthesized tasks combining complex user queries, simulation state, and automated validation programs
Asper-Bench: 250 challenging tasks isolating library-grounded program generation from dependency-free code generation