LAM SIMULATOR: Advancing Data Generation for Large Action Model Training via Online Exploration and Trajectory Feedback

📅 2025-06-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
High-quality, multi-step task data for training Large Action Models (LAMs) is scarce, hindering scalable and effective LAM development. Method: This paper proposes an intelligent agent simulation framework supporting online exploration and closed-loop feedback. It introduces two novel mechanisms, dynamic task query generation and trajectory self-evolution, enabling LLM-based agents to autonomously invoke tools, respond to real-time feedback, and generate diverse action trajectories within an interactive simulation environment. Trajectory distillation and quality filtering further ensure efficient, high-fidelity data construction with minimal human involvement. Contribution/Results: The approach drastically reduces reliance on manual annotation; data production requires near-zero human intervention. When LAMs are trained on the generated data, performance improves by up to 49.3% on the ToolBench and CRMArena benchmarks, consistently surpassing the original baselines across all evaluated metrics.

๐Ÿ“ Abstract
Large Action Models (LAMs) for AI Agents offer incredible potential but face challenges due to the need for high-quality training data, especially for multi-step tasks that involve planning, executing tool calls, and responding to feedback. To address these issues, we present LAM SIMULATOR, a comprehensive framework designed for online exploration of agentic tasks with high-quality feedback. Our framework features a dynamic task query generator, an extensive collection of tools, and an interactive environment where Large Language Model (LLM) Agents can call tools and receive real-time feedback. This setup enables LLM Agents to explore and solve tasks autonomously, facilitating the discovery of multiple approaches to tackle any given task. The resulting action trajectory data are then used to create high-quality training datasets for LAMs. Our experiments on popular agentic benchmarks, ToolBench and CRMArena, highlight the effectiveness of LAM SIMULATOR: models trained with self-generated datasets using our framework achieve significant performance gains, up to a 49.3% improvement over their original baselines. LAM SIMULATOR requires minimal human input during dataset creation, underscoring its efficiency and effectiveness in speeding up the development of AI agents.
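The pipeline the abstract describes, generating task queries, letting an agent invoke tools and observe feedback, then filtering the resulting trajectories into training data, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual implementation: every name here (`generate_task_query`, `run_episode`, `build_dataset`, the toy tools) is hypothetical, and the "agent" is reduced to a direct tool call so the loop stays self-contained.

```python
# Hypothetical sketch of a LAM-SIMULATOR-style data-generation loop:
# dynamic task query generation -> tool-calling episode with feedback
# -> quality filtering of trajectories. All names are illustrative.

def add_tool(args):
    return {"result": args["a"] + args["b"]}

def mul_tool(args):
    return {"result": args["a"] * args["b"]}

TOOLS = {"add": add_tool, "mul": mul_tool}

def generate_task_query(seed):
    # Dynamic task query generation: derive a concrete, checkable task
    # from a seed (stand-in for LLM-generated queries).
    a, b = seed % 10, (seed // 10) % 10
    return {"query": f"compute {a} + {b}", "tool": "add",
            "args": {"a": a, "b": b}, "expected": a + b}

def run_episode(task):
    # One agent episode: invoke the chosen tool and record the
    # environment's real-time feedback alongside the action taken.
    feedback = TOOLS[task["tool"]](task["args"])
    success = feedback["result"] == task["expected"]
    return {"query": task["query"],
            "actions": [(task["tool"], task["args"], feedback)],
            "success": success}

def build_dataset(num_tasks):
    # Quality filtering: keep only trajectories that solved their task,
    # yielding training data with no manual annotation.
    episodes = (run_episode(generate_task_query(s)) for s in range(num_tasks))
    return [t for t in episodes if t["success"]]

dataset = build_dataset(25)
```

In the real framework the query generator and agent are LLM-driven and the environment exposes a large tool collection; the sketch only shows how exploration, feedback, and filtering compose into a closed loop.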
Problem

Research questions and friction points this paper is trying to address.

Generating high-quality training data for Large Action Models (LAMs)
Enabling autonomous exploration of multi-step tasks with real-time feedback
Reducing human input in dataset creation for AI agent development
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online exploration with real-time feedback
Dynamic task query generator
Autonomous tool calls and trajectory data