🤖 AI Summary
This work addresses the challenge of combinatorial explosion in long-horizon, multi-step tool planning and the difficulty of effectively reusing previously successful trajectories. Inspired by ant colony optimization, it introduces a pheromone-inspired mechanism into large language model (LLM) agents for tool-use planning. By explicitly modeling tool-transition patterns at the trajectory level, the proposed method captures historically successful paths and continuously leverages them to guide policy optimization. The resulting approach, termed PhGPO, significantly outperforms existing baselines across multiple long-horizon tool-use tasks, improving both planning efficiency and success rate.
📝 Abstract
Recent advances in Large Language Model (LLM) agents have demonstrated strong capabilities in executing complex tasks through tool use. However, long-horizon multi-step tool planning remains challenging because the exploration space suffers from combinatorial explosion. In this setting, even when a correct tool-use path is found, it typically yields only an immediate reward for the current update and provides no reusable information for subsequent training. In this paper, we argue that historically successful trajectories contain reusable tool-transition patterns that can be leveraged throughout the entire training process. Inspired by ant colony optimization, in which historically successful paths are reflected by pheromone trails, we propose Pheromone-Guided Policy Optimization (PhGPO), which learns a trajectory-level transition pattern (i.e., pheromone) from historical trajectories and then uses the learned pheromone to guide policy optimization. This learned pheromone provides explicit, reusable guidance that steers policy optimization toward historically successful tool transitions, thereby improving long-horizon tool planning. Comprehensive experimental results demonstrate the effectiveness of our proposed PhGPO.
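To make the pheromone idea concrete, here is a minimal illustrative sketch (not the paper's actual PhGPO implementation): a table of pheromone weights over tool-to-tool transitions that evaporates over time, receives deposits from successful trajectories, and yields transition probabilities that could bias a planner or policy update. The class name, hyperparameters, and tool names below are all hypothetical choices for illustration.

```python
from collections import defaultdict

class PheromoneTable:
    """Illustrative trajectory-level pheromone over tool transitions.

    Keeps a weight tau[(a, b)] for each transition from tool a to tool b.
    Successful trajectories deposit pheromone on their edges; evaporation
    decays stale entries, as in ant colony optimization.
    """

    def __init__(self, evaporation=0.1, deposit=1.0):
        self.evaporation = evaporation  # fraction of pheromone lost per update
        self.deposit = deposit          # amount added per successful transition
        self.tau = defaultdict(lambda: 1.0)  # uniform prior weight

    def update(self, successful_trajectories):
        """Evaporate existing pheromone, then deposit along each success."""
        for edge in list(self.tau):
            self.tau[edge] *= (1.0 - self.evaporation)
        for traj in successful_trajectories:
            for a, b in zip(traj, traj[1:]):
                self.tau[(a, b)] += self.deposit

    def transition_probs(self, current_tool, candidates):
        """Normalize pheromone weights into a guidance distribution."""
        weights = [self.tau[(current_tool, c)] for c in candidates]
        total = sum(weights)
        return {c: w / total for c, w in zip(candidates, weights)}

# Hypothetical usage: two successful trajectories both transition
# search -> read, so the guidance signal favors "read" after "search".
table = PheromoneTable()
table.update([["search", "read", "summarize"],
              ["search", "read", "answer"]])
probs = table.transition_probs("search", ["read", "calc"])
```

In the paper's framing, such a guidance distribution would be combined with the LLM policy's own action probabilities during optimization, steering exploration toward historically successful tool transitions rather than replacing the policy outright.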