PhGPO: Pheromone-Guided Policy Optimization for Long-Horizon Tool Planning

📅 2026-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the combinatorial explosion in long-horizon, multi-step tool planning and the difficulty of effectively reusing previously successful trajectories. Inspired by ant colony optimization, it introduces a pheromone-inspired mechanism into large language model (LLM) agents for tool-use planning. By explicitly modeling tool-transition patterns at the trajectory level, the proposed method captures and continuously leverages historically successful paths to guide policy optimization. The resulting approach, termed PhGPO, outperforms existing baselines across multiple long-horizon tool-use tasks, with substantial improvements in both planning efficiency and success rate.

📝 Abstract
Recent advances in Large Language Model (LLM) agents have demonstrated strong capabilities in executing complex tasks through tool use. However, long-horizon multi-step tool planning is challenging because the exploration space suffers from a combinatorial explosion. In this setting, even when a correct tool-use path is found, it typically serves only as an immediate reward for the current update and provides no reusable information for subsequent training. In this paper, we argue that historically successful trajectories contain reusable tool-transition patterns that can be leveraged throughout the whole training process. Inspired by ant colony optimization, in which historically successful paths are reflected by pheromone trails, we propose Pheromone-Guided Policy Optimization (PhGPO), which learns a trajectory-based transition pattern (i.e., a pheromone) from historical trajectories and then uses the learned pheromone to guide policy optimization. This learned pheromone provides explicit and reusable guidance that steers policy optimization toward historically successful tool transitions, thereby improving long-horizon tool planning. Comprehensive experimental results demonstrate the effectiveness of our proposed PhGPO.
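To make the pheromone idea concrete, here is a minimal sketch in the spirit of ant colony optimization. It is not the paper's actual method: the class name, the evaporation/deposit rule, and the guidance bonus are illustrative assumptions, showing only how a trajectory-level pheromone over tool transitions could be accumulated from successful paths and turned into a reusable guidance signal.

```python
from collections import defaultdict

class PheromoneTable:
    """Hypothetical pheromone over (tool_a -> tool_b) transitions.

    Successful trajectories deposit pheromone on their transitions;
    all pheromone evaporates a little on every update, so stale paths fade.
    """

    def __init__(self, evaporation=0.1, deposit=1.0):
        self.evaporation = evaporation  # fraction decayed per update
        self.deposit = deposit          # amount laid per successful transition
        self.tau = defaultdict(float)   # (tool_a, tool_b) -> pheromone level

    def update(self, trajectory, success):
        """Evaporate everywhere, then reinforce a successful path's transitions."""
        for edge in self.tau:
            self.tau[edge] *= (1.0 - self.evaporation)
        if success:
            for a, b in zip(trajectory, trajectory[1:]):
                self.tau[(a, b)] += self.deposit

    def bonus(self, tool_a, tool_b):
        """Guidance signal: this transition's share of tool_a's outgoing pheromone."""
        total = sum(v for (a, _), v in self.tau.items() if a == tool_a)
        if total == 0.0:
            return 0.0
        return self.tau.get((tool_a, tool_b), 0.0) / total

# Toy usage: two successful trajectories sharing the search -> summarize step.
table = PheromoneTable()
table.update(["search", "summarize", "answer"], success=True)
table.update(["search", "summarize", "verify"], success=True)
print(table.bonus("search", "summarize"))   # 1.0: the only observed successor
print(table.bonus("summarize", "answer"))   # positive, higher than any unseen transition
```

In a policy-optimization loop, such a bonus could be added to the reward (or used to bias action logits) so that the policy is steered toward historically successful tool transitions rather than re-exploring from scratch.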
Problem

Research questions and friction points this paper is trying to address.

long-horizon tool planning
combinatorial explosion
tool-use trajectories
policy optimization
reusable guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pheromone-Guided Policy Optimization
Long-Horizon Tool Planning
Tool-Use Trajectories
Reinforcement Learning
Ant Colony Optimization
Yu Li
Southeast University, Monash University
Natural Language Processing · Large Language Models
Guangfeng Cai
School of Computer Science and Engineering, Southeast University, Nanjing, China
Shengtian Yang
School of Computer Science and Engineering, Southeast University, Nanjing, China
Han Luo
School of Computer Science and Engineering, Southeast University, Nanjing, China
Shuo Han
Huawei Noah’s Ark Lab, China
Xu He
Huawei Noah's Ark Lab
Reinforcement Learning · Artificial Intelligence
Dong Li
Huawei Noah's Ark Lab
Reinforcement Learning · LLM Alignment
Lei Feng
Professor, Southeast University
Machine Learning · Data Science · Statistics