🤖 AI Summary
This work addresses the challenge of efficient parallel scheduling for large language models (LLMs) in complex tasks. We propose a novel “planning-on-graph” paradigm: first automatically decomposing natural-language tasks into subtasks and constructing an abstract task graph; then generating a parallelizable execution schedule grounded in the graph structure. Our key innovations include a task-graph-driven controllable synthetic data generation pipeline and a two-stage supervised fine-tuning framework—comprising graph understanding and scheduling generation—enhanced by graph-structured prompting and synthetic data augmentation. These techniques collectively improve the model’s generalization to arbitrary-scale task graphs. Experiments demonstrate substantial gains in parallel task completion rate and global execution efficiency on both API-based and open-source trainable LLMs, enabling standardized graph representation and fully automated parallel scheduling.
📝 Abstract
Large Language Models (LLMs) have demonstrated exceptional reasoning abilities in task planning. However, parallel scheduling remains an under-explored challenge. This paper introduces a novel paradigm, plan-over-graph, in which the model first decomposes a real-life textual task into executable subtasks and constructs an abstract task graph. The model then takes this task graph as input and generates a plan for parallel execution. To enhance planning capability on complex, scalable graphs, we design an automated and controllable pipeline to generate synthetic graphs and propose a two-stage training scheme. Experimental results show that our plan-over-graph method significantly improves task performance on both API-based LLMs and trainable open-source LLMs. By normalizing complex tasks as graphs, our method naturally supports parallel execution and improves global efficiency. The code and data are available at https://github.com/zsq259/Plan-over-Graph.
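To make the scheduling step concrete, the sketch below shows one common way to turn a task dependency graph into a parallelizable schedule: group the DAG into "waves" of mutually independent subtasks using topological layering. This is an illustrative assumption about how a graph-grounded schedule can be derived, not the paper's actual implementation; the function name `parallel_schedule` and the toy task graph are hypothetical.

```python
from collections import defaultdict, deque

def parallel_schedule(tasks, deps):
    """Group tasks of a dependency DAG into waves that can run in parallel.

    tasks: iterable of task names.
    deps: dict mapping each task to the set of tasks it depends on.
    Returns a list of waves; tasks within one wave have no dependency
    path between them and may execute concurrently.
    """
    indegree = {t: len(deps.get(t, ())) for t in tasks}
    dependents = defaultdict(list)
    for t, preds in deps.items():
        for p in preds:
            dependents[p].append(t)

    ready = deque(t for t, d in indegree.items() if d == 0)
    schedule = []
    while ready:
        wave = list(ready)     # everything currently unblocked runs together
        ready.clear()
        schedule.append(wave)
        for t in wave:         # completing a wave unblocks its dependents
            for child in dependents[t]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    ready.append(child)
    if sum(len(w) for w in schedule) != len(indegree):
        raise ValueError("dependency graph contains a cycle")
    return schedule

# Hypothetical task graph: D depends on B and C, which both depend on A.
waves = parallel_schedule(
    ["A", "B", "C", "D"],
    {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}},
)
# waves == [["A"], ["B", "C"], ["D"]]
```

Under this view, the global efficiency gain comes from the middle wave: B and C share no dependency path, so a plan grounded in the graph structure can dispatch them concurrently instead of serially.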