🤖 AI Summary
Existing benchmarks rely on predefined tool specifications, making it difficult to evaluate language agents' ability to autonomously generate and evolve tools from abstract task requirements, and they lack fine-grained diagnostics of failure modes. This work proposes the first multidimensional diagnostic benchmark for autonomous tool creation, which, without assuming any predefined specifications, employs a task-driven evaluation framework to systematically quantify agent performance across dimensions including interface compliance, functional correctness, and downstream utility. Experimental results reveal that state-of-the-art models struggle to generate precise interfaces or executable logic in a single attempt; minor initial errors are amplified through the pipeline and substantially degrade downstream performance. These findings highlight a critical limitation in current agents' capacity for autonomous tool creation and evolution.
📝 Abstract
Research on self-evolving language agents has accelerated, drawing increasing attention to their ability to create, adapt, and maintain tools from task requirements. However, existing benchmarks predominantly rely on predefined specifications, which limits scalability and hinders truly autonomous evolution. While recent studies attempt to dynamically generate tools, they primarily emphasize downstream performance, resulting in a "black-box" evaluation that makes it difficult to attribute failures to specific causes. To address this, we propose Tool-Genesis, a diagnostic benchmark designed to quantify agent capabilities across multiple dimensions, including interface compliance, functional correctness, and downstream utility. Tool-Genesis evaluates whether agents can construct task-relevant tools solely from abstract requirements (without preset specifications) and use them to solve realistic problems. Crucially, we find that even state-of-the-art models struggle to produce precise tool interfaces or executable logic in a one-shot setting. These minor initial flaws are amplified through the pipeline, leading to a sharp degradation in downstream metrics. We hope Tool-Genesis will guide future research toward training and steering models to synthesize persistent, general-purpose tools that better address real-world challenges.
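
To make the diagnostic dimensions concrete, here is a minimal sketch of how interface compliance and functional correctness could be scored for an agent-generated tool. All function names, the scoring formulas, and the toy tool below are illustrative assumptions for exposition, not the benchmark's actual evaluation harness:

```python
import inspect

# Hypothetical sketch: scoring an agent-generated tool along two of the
# diagnostic dimensions named in the abstract. The scoring rules and the
# example tool are assumptions, not Tool-Genesis's real implementation.

def score_interface(tool, expected_params):
    """Interface compliance: fraction of expected parameter names
    that the tool's signature actually exposes."""
    params = set(inspect.signature(tool).parameters)
    return len(params & set(expected_params)) / len(expected_params)

def score_correctness(tool, cases):
    """Functional correctness: fraction of (kwargs, expected) cases
    the tool solves without raising or returning a wrong value."""
    passed = 0
    for kwargs, expected in cases:
        try:
            if tool(**kwargs) == expected:
                passed += 1
        except Exception:
            pass  # runtime failures count against correctness
    return passed / len(cases)

# A toy "agent-generated" tool: Celsius-to-Fahrenheit conversion that
# fails on any unit it was not written to handle.
def convert_temp(value, unit):
    if unit == "F":
        return value * 9 / 5 + 32
    raise ValueError(f"unsupported unit: {unit}")

interface = score_interface(convert_temp, ["value", "unit"])
correctness = score_correctness(convert_temp, [
    ({"value": 100, "unit": "F"}, 212.0),
    ({"value": 0, "unit": "K"}, 273.15),  # unsupported unit -> failure
])
print(interface, correctness)  # -> 1.0 0.5
```

A per-dimension breakdown like this is what distinguishes the diagnostic setting from "black-box" evaluation: a downstream failure can be attributed to a malformed interface, broken logic, or a tool that runs but does not help the task.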