🤖 AI Summary
To address the challenges of sparse rewards, long-horizon credit assignment, and high computational cost in reinforcement learning (RL) for complex multi-step task planning with large language models (LLMs), this paper proposes a “multi-turn-to-single-turn” training paradigm. It compresses multi-step planning into single-step reasoning, leverages expert trajectories to generate dense, verifiable rewards, and employs Group Relative Policy Optimization (GRPO) for credit assignment across steps and stable policy updates. Theoretically, the method raises multi-turn success probability under minimal turns and transfers to subtasks with shorter horizons. Trained end-to-end on a 1.5B-parameter LLM, the approach achieves a 70% success rate on long-horizon (≥30-step) planning benchmarks—outperforming baselines of up to 14B parameters—and generalizes to the successful completion of all simpler subtasks.
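The core of GRPO is replacing a learned value critic with a group-relative baseline: several completions are sampled per prompt, and each completion's advantage is its reward normalized against the group's mean and standard deviation. A minimal sketch of that normalization (function names are illustrative, not from the paper):

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages: normalize each sampled completion's
    reward against the group's mean and standard deviation, so no
    learned value critic is needed."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mu) / sigma for r in rewards]

# Example: rewards for 4 sampled plans generated from the same prompt,
# scored against an expert trajectory.
rewards = [1.0, 0.5, 0.0, 0.5]
advantages = grpo_advantages(rewards)
```

Completions scoring above the group mean get positive advantages and are reinforced; below-mean completions are suppressed, which is what makes a single verifiable per-completion reward sufficient for stable policy updates.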
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in knowledge acquisition, reasoning, and tool use, making them promising candidates for autonomous agent applications. However, training LLM agents for complex multi-turn task planning faces significant challenges, including sparse episode-wise rewards, credit assignment across long horizons, and the computational overhead of reinforcement learning in multi-turn interaction settings. To this end, this paper introduces a novel approach that transforms multi-turn task planning into single-turn task reasoning problems, enabling efficient policy optimization through Group Relative Policy Optimization (GRPO) with dense and verifiable rewards derived from expert trajectories. Our theoretical analysis shows that GRPO improvement on single-turn task reasoning yields a higher multi-turn success probability under the minimal number of turns, as well as generalization to subtasks with shorter horizons. Experimental evaluation on a complex task-planning benchmark demonstrates that our 1.5B-parameter model trained with single-turn GRPO outperforms larger baseline models of up to 14B parameters, achieving success rates of 70% on long-horizon planning tasks with over 30 steps. We also validate, both theoretically and empirically, strong cross-task generalizability: models trained on complex tasks successfully complete all simpler subtasks.
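The abstract's "dense and verifiable rewards derived from expert trajectories" can be pictured as scoring a model's single-turn plan against the expert step sequence instead of waiting for a sparse end-of-episode success signal. The sketch below uses an in-order prefix match as the scoring rule; this rule and the function names are illustrative assumptions, not the paper's exact reward design:

```python
def plan_reward(predicted_steps, expert_steps):
    """Hypothetical dense reward: fraction of the expert trajectory
    reproduced as an in-order prefix by the predicted plan. A fully
    correct plan scores 1.0; an early deviation still earns partial
    credit, unlike a sparse 0/1 episode reward."""
    matched = 0
    for pred, exp in zip(predicted_steps, expert_steps):
        if pred != exp:
            break  # first wrong step ends the verifiable prefix
        matched += 1
    return matched / len(expert_steps)

# A plan that deviates at step 3 of a 3-step expert trajectory
# still receives partial credit.
r = plan_reward(["pick(cup)", "move(table)", "pour(cup)"],
                ["pick(cup)", "move(table)", "place(cup)"])
```

Because the reward is computed by direct comparison with a known-good trajectory, it is verifiable (no learned reward model) and dense enough to rank the sampled completions within a GRPO group.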