🤖 AI Summary
This work addresses the high computational cost, latency, and failure rates in AI agent workflows caused by redundant reasoning and repetitive tool invocations, often exacerbated by hallucinations. To mitigate these issues, we propose the Agent Workflow Optimization (AWO) framework, which, for the first time, automatically identifies recurring tool-calling patterns through trajectory analysis and abstracts them into deterministic meta-tools. This abstraction reduces the number of intermediate large language model (LLM) calls, thereby enhancing both execution efficiency and robustness. Our approach enables fully automated optimization of agent workflows, achieving up to an 11.9% reduction in LLM invocations and a 4.2 percentage point improvement in task success rate on two widely used benchmarks.
📝 Abstract
Agentic AI enables LLMs to dynamically reason, plan, and interact with tools to solve complex tasks. However, agentic workflows often require many iterative reasoning steps and tool invocations, leading to significant operational expense, end-to-end latency, and failures due to hallucinations. This work introduces Agent Workflow Optimization (AWO), a framework that identifies and optimizes redundant tool-execution patterns to improve the efficiency and robustness of agentic workflows. AWO analyzes existing workflow traces to discover recurring sequences of tool calls and transforms them into meta-tools: deterministic, composite tools that bundle multiple agent actions into a single invocation. Meta-tools bypass unnecessary intermediate LLM reasoning steps and reduce operational cost while also shortening execution paths, leading to fewer failures. Experiments on two agentic AI benchmarks show that AWO reduces the number of LLM calls by up to 11.9% while also increasing the task success rate by up to 4.2 percentage points.
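The core idea, mining recurring tool-call sequences from traces and wrapping them as a single deterministic call, can be sketched as below. This is a hypothetical illustration under simplifying assumptions, not the paper's implementation: the function names (`mine_frequent_subsequences`, `make_meta_tool`), the fixed-length n-gram mining, the support threshold, and the state-passing tool signature are all assumptions.

```python
from collections import Counter

def mine_frequent_subsequences(traces, n=3, min_support=2):
    """Count length-n consecutive tool-call subsequences across traces
    and return those seen at least min_support times (hypothetical
    stand-in for AWO's trace analysis)."""
    counts = Counter()
    for trace in traces:
        for i in range(len(trace) - n + 1):
            counts[tuple(trace[i:i + n])] += 1
    return [seq for seq, c in counts.items() if c >= min_support]

def make_meta_tool(seq, tools):
    """Bundle a sequence of tool names into one composite tool that
    runs the steps deterministically, with no intermediate LLM calls."""
    def meta_tool(state):
        for name in seq:
            state = tools[name](state)  # chain tools directly
        return state
    return meta_tool

# Toy workflow traces (each a list of tool names the agent invoked).
traces = [
    ["search", "fetch", "parse", "summarize"],
    ["search", "fetch", "parse", "answer"],
    ["lookup", "search", "fetch", "parse"],
]
patterns = mine_frequent_subsequences(traces, n=3, min_support=3)
# ("search", "fetch", "parse") appears in all three traces
```

A discovered pattern can then replace three separate agent actions, three LLM decisions, with one invocation of the meta-tool returned by `make_meta_tool`.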