Gecko: A Simulation Environment with Stateful Feedback for Refining Agent Tool Calls

πŸ“… 2026-02-22
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge that large language models (LLMs) frequently make errors in tool calling, while iterative optimization using real tools is costly and poses safety risks. To overcome this, we propose Geckoβ€”the first simulation environment for tool calling that supports stateful feedback. Gecko integrates rule-based validation with LLM reasoning to verify the correctness of tool names and parameters, and generates plausible responses according to predefined output schemas, thereby providing task-oriented, state-aware feedback. Building upon this environment, we develop GATS, a test-time scaling method that significantly enhances the tool-use performance of models such as GPT-4o, GPT-5, and Gemini-3.0-pro on BFCLv3 and τ²-bench, enabling safe, efficient, and low-cost iterative refinement.
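The summary describes Gecko checking tool names and input arguments against predefined schemas before any (simulated) execution. A minimal sketch of that rule-based validation layer is below; the registry format, parameter rules, and feedback strings are assumptions for illustration, since the paper pairs such rules with LLM reasoning rather than relying on them alone.

```python
# Hypothetical sketch of Gecko-style rule-based tool-call validation.
# TOOL_REGISTRY, its schema format, and the feedback messages are
# illustrative assumptions, not the paper's actual data structures.

TOOL_REGISTRY = {
    "get_weather": {
        "params": {
            "city": {"type": str, "required": True},
            "unit": {"type": str, "required": False},
        },
    }
}

def validate_tool_call(name, args):
    """Return (ok, feedback) for a proposed tool call."""
    if name not in TOOL_REGISTRY:
        return False, f"Unknown tool name: {name!r}"
    spec = TOOL_REGISTRY[name]["params"]
    for param, rules in spec.items():
        if rules["required"] and param not in args:
            return False, f"Missing required argument: {param!r}"
        if param in args and not isinstance(args[param], rules["type"]):
            return False, f"Argument {param!r} has the wrong type"
    extra = set(args) - set(spec)
    if extra:
        return False, f"Unexpected arguments: {sorted(extra)}"
    return True, "Tool call is valid"
```

Feedback strings like these are what lets the calling LLM repair a specific mistake (a misspelled tool name, a missing argument) instead of retrying blindly.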

πŸ“ Abstract
The ability to use tools is fundamental for large language model (LLM) agents. Given a task, existing systems use LLMs to plan and generate tool calls, which are executed by real-world tools to complete the task. However, tool calls are prone to errors because they are derived solely from the LLM's intrinsic capabilities. Moreover, while it is useful to let LLMs iteratively refine the tool-call sequence using execution results from real tools, this process can be expensive and lead to unsafe outcomes. To improve LLM tool calls and address the issues caused by using real tools for refinement, we introduce Gecko, a comprehensive environment that simulates tool responses using a combination of rules and LLMs. Specifically, Gecko checks the validity of tool calls, including input arguments and tool names, synthesizes reasonable responses that adhere to the output schema, and assesses whether all task objectives have been achieved. These three types of feedback provided by Gecko allow LLMs to refine their tool calls, forming a simple yet effective test-time scaling method named GATS. On BFCLv3 and $Ο„^2$-bench, GATS consistently improves the tool-calling performance of various LLMs, including GPT-4o, GPT-5, and Gemini-3.0-pro. We further discuss the working mechanisms of our method and share future possibilities.
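The abstract outlines a loop: the LLM proposes tool calls, the simulator returns validity checks, synthesized responses, and a task-completion verdict, and the LLM refines accordingly. A hedged sketch of that test-time refinement loop follows; `propose` and `simulate` are hypothetical stand-ins for the LLM and the Gecko environment, whose interfaces the abstract does not specify.

```python
# Hypothetical sketch of a GATS-style refinement loop: draft tool calls,
# collect simulated feedback, and retry until the simulator reports that
# all task objectives are met or the budget is exhausted. The function
# names and toy stand-ins below are illustrative assumptions.

def refine_tool_calls(propose, simulate, max_iters=5):
    feedback = None
    calls = []
    for _ in range(max_iters):
        calls = propose(feedback)       # LLM drafts or repairs tool calls
        ok, feedback = simulate(calls)  # validity + synthetic responses
        if ok:                          # task objectives satisfied
            return calls
    return calls                        # best effort after the budget

# Toy stand-ins: the "LLM" fixes a missing argument once it sees feedback.
def toy_propose(feedback):
    if feedback:
        return [("get_weather", {"city": "Paris"})]
    return [("get_weather", {})]

def toy_simulate(calls):
    name, args = calls[0]
    if "city" not in args:
        return False, "Missing required argument: city"
    return True, None
```

Because the simulator, not a live service, closes the loop, each refinement iteration is cheap and cannot trigger real-world side effects, which is the safety and cost argument the abstract makes.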
Problem

Research questions and friction points this paper is trying to address.

tool calling
large language models
simulation environment
stateful feedback
agent refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

simulation environment
tool calling
stateful feedback
test-time scaling
LLM agent