🤖 AI Summary
This work addresses two key challenges in evaluating large language models (LLMs) for complex embodied action planning: the difficulty of robustly assessing planning capability and the scarcity of high-quality, multi-step task data. To this end, we propose ASPERA, a framework that synthesizes structured tasks with multi-step goals, dynamic environmental states, and automated verification procedures, enabled by a simulated assistant library and a human-in-the-loop data generation engine. Leveraging ASPERA, we introduce Asper-Bench, a benchmark of 250 high-difficulty tasks, enabling the first systematic evaluation of LLMs’ ability to transfer pre-trained programming knowledge to embodied action planning. Experimental results demonstrate that generating programs that interface with the assistant library is substantially more challenging for LLMs than generic, dependency-free code generation, revealing critical bottlenecks in symbol-action alignment and environment-interaction reasoning.
📝 Abstract
This work evaluates the potential of large language models (LLMs) to power digital assistants capable of complex action execution. These assistants rely on pre-trained programming knowledge to execute multi-step goals by composing objects and functions defined in assistant libraries into action execution programs. To support this evaluation, we develop ASPERA, a framework comprising an assistant library simulation and a human-assisted LLM data generation engine. Our engine allows developers to guide LLM generation of high-quality tasks consisting of complex user queries, simulation state, and corresponding validation programs, tackling both data availability and evaluation robustness challenges. Alongside the framework we release Asper-Bench, an evaluation dataset of 250 challenging tasks generated with ASPERA, which we use to show that program generation grounded in custom assistant libraries is a significantly greater challenge for LLMs than dependency-free code generation.
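To make the setup concrete, the following is a minimal, purely illustrative sketch (not the actual ASPERA API; all names, functions, and the toy state are invented for this example) of the three ingredients the abstract describes: a simulated assistant library whose objects and functions an LLM composes into an action execution program for a multi-step query, and a separate validation program that checks the resulting simulation state.

```python
# Hypothetical illustration only -- not ASPERA's real library or interfaces.
from dataclasses import dataclass, field

@dataclass
class SimulationState:
    """Toy environment state that assistant-library calls read and mutate."""
    calendar: list = field(default_factory=list)     # (day, slot) pairs
    emails_sent: list = field(default_factory=list)  # (recipient, body) pairs

# --- Toy "assistant library": the objects/functions an LLM must compose ---
def find_free_slot(state: SimulationState, day: str) -> str:
    booked = {slot for d, slot in state.calendar if d == day}
    return next(s for s in ("09:00", "10:00", "11:00") if s not in booked)

def book_meeting(state: SimulationState, day: str, slot: str) -> None:
    state.calendar.append((day, slot))

def send_email(state: SimulationState, to: str, body: str) -> None:
    state.emails_sent.append((to, body))

# --- Action execution program for a multi-step query such as:
# "Book a meeting with Ana on Monday and email her the time." ---
def action_program(state: SimulationState) -> None:
    slot = find_free_slot(state, "Monday")
    book_meeting(state, "Monday", slot)
    send_email(state, "ana@example.com", f"Meeting booked at {slot} on Monday")

# --- Validation program: automated check over the final simulation state ---
def validate(state: SimulationState) -> bool:
    booked = any(d == "Monday" for d, _ in state.calendar)
    emailed = any("Meeting booked" in body for _, body in state.emails_sent)
    return booked and emailed

state = SimulationState(calendar=[("Monday", "09:00")])  # 09:00 already taken
action_program(state)
assert validate(state)
```

The point of the sketch is the evaluation protocol, not the domain: the model must ground its generated program in a custom library's state-mutating functions, and success is judged by executing a validation program against the resulting environment state rather than by comparing program text.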