Heddle: A Distributed Orchestration System for Agentic RL Rollout

📅 2026-03-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the long-tail trajectory bottleneck in agent-based reinforcement learning, in which frequent tool invocations cause queueing delays, interference overhead, and inflated per-token processing time, by proposing a trajectory-centric distributed orchestration system. The system introduces an integrated triad of trajectory-level scheduling, trajectory-aware placement, and trajectory-adaptive resource management, overcoming the limitations of conventional step-centric designs. It optimizes the timing, location, and manner of execution during the rollout phase through runtime prediction with progressive priority scheduling, presorted dynamic programming combined with opportunistic migration during idle tool-call intervals, and dynamic model-parallelism tuning. Experimental results demonstrate that, across diverse agentic RL workloads, the approach achieves up to a 2.5× improvement in end-to-end rollout throughput, substantially mitigating the long-tail trajectory problem.
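The "runtime prediction with progressive priority scheduling" idea can be sketched as a priority queue keyed on predicted remaining runtime, with an aging term that boosts long-waiting trajectories so they are not starved behind a stream of short ones. This is an illustrative assumption of how such a scheduler could look, not Heddle's actual implementation; all names and the `aging_weight` parameter are hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Trajectory:
    priority: float
    name: str = field(compare=False)
    predicted_remaining_s: float = field(compare=False)
    waited_s: float = field(compare=False, default=0.0)

def make_priority(predicted_remaining_s: float, waited_s: float,
                  aging_weight: float = 0.1) -> float:
    # Lower value = scheduled sooner. The aging term shrinks the effective
    # remaining time of long-waiting trajectories (progressive priority),
    # so a long-tail trajectory eventually jumps ahead of fresh short ones.
    return predicted_remaining_s - aging_weight * waited_s

def schedule(trajectories):
    # trajectories: iterable of (name, predicted_remaining_s, waited_s)
    heap = [Trajectory(make_priority(rem, wait), name, rem, wait)
            for name, rem, wait in trajectories]
    heapq.heapify(heap)
    order = []
    while heap:
        order.append(heapq.heappop(heap).name)
    return order

order = schedule([("short", 5.0, 0.0),
                  ("long-tail", 120.0, 0.0),
                  ("aged-long", 120.0, 1200.0)])
print(order)  # ['aged-long', 'short', 'long-tail']
```

Note how the aged long trajectory overtakes even the short one once its accumulated wait outweighs its predicted remaining runtime; this is the behavior that keeps cumulative queueing delay bounded for long-tail trajectories.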
📝 Abstract
Agentic Reinforcement Learning (RL) enables LLMs to solve complex tasks by alternating between a data-collection rollout phase and a policy training phase. During rollout, the agent generates trajectories, i.e., multi-step interactions between LLMs and external tools. Yet, frequent tool calls induce long-tailed trajectory generation that bottlenecks rollouts. This stems from step-centric designs that ignore trajectory context, triggering three system problems for long-tail trajectory generation: queueing delays, interference overhead, and inflated per-token time. We propose Heddle, a trajectory-centric system to optimize the when, where, and how of agentic rollout execution. Heddle integrates three core mechanisms: trajectory-level scheduling using runtime prediction and progressive priority to minimize cumulative queueing; trajectory-aware placement via presorted dynamic programming and opportunistic migration during idle tool-call intervals to minimize interference; and a trajectory-adaptive resource manager that dynamically tunes model parallelism to accelerate the per-token time of long-tail trajectories while maintaining high throughput for short trajectories. Evaluations across diverse agentic RL workloads demonstrate that Heddle effectively neutralizes the long-tail bottleneck, achieving up to 2.5× higher end-to-end rollout throughput compared to state-of-the-art baselines.
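The abstract does not spell out the presorted dynamic program, but the intuition behind trajectory-aware placement can be approximated with a presorted greedy sketch: place the longest predicted trajectories first, each onto the currently least-loaded engine, so long-tail trajectories do not pile onto one engine and interfere with each other. This simplified stand-in (longest-first greedy rather than the paper's DP plus migration) uses hypothetical names throughout.

```python
import heapq

def place_trajectories(predicted_runtimes, num_engines):
    # Simplified trajectory-aware placement: presort by predicted runtime
    # (descending), then assign each trajectory to the least-loaded engine.
    # The paper's actual mechanism (presorted dynamic programming with
    # opportunistic migration during idle tool-call intervals) is more
    # involved; this only conveys the load-spreading intuition.
    engines = [(0.0, e) for e in range(num_engines)]  # (load, engine id)
    heapq.heapify(engines)
    placement = {e: [] for e in range(num_engines)}
    for traj_id, runtime in sorted(enumerate(predicted_runtimes),
                                   key=lambda kv: kv[1], reverse=True):
        load, engine = heapq.heappop(engines)
        placement[engine].append(traj_id)
        heapq.heappush(engines, (load + runtime, engine))
    return placement

# Two long-tail trajectories (120s, 110s) land on different engines.
print(place_trajectories([120.0, 5.0, 7.0, 110.0], 2))
```

The presort is the key step: assigning long trajectories while all engines are still lightly loaded is what separates them across engines and bounds cross-trajectory interference.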
Problem

Research questions and friction points this paper is trying to address.

Agentic Reinforcement Learning
long-tail trajectory
rollout bottleneck
tool calls
trajectory generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

trajectory-centric scheduling
agentic reinforcement learning
distributed orchestration
adaptive resource management
long-tail trajectory optimization
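The trajectory-adaptive resource management listed above (dynamically tuning model parallelism per trajectory) can be sketched as a simple policy: long-tail trajectories get a higher tensor-parallel degree to cut per-token latency, while short trajectories keep low parallelism to preserve aggregate throughput. The thresholds, degrees, and function name below are illustrative assumptions, not values from the paper.

```python
def choose_tp_degree(predicted_remaining_tokens: int) -> int:
    # Hypothetical policy: higher tensor parallelism shortens per-token
    # time for long-tail trajectories at the cost of per-GPU efficiency,
    # so it is reserved for trajectories with long predicted remainders.
    if predicted_remaining_tokens > 8192:   # long-tail: prioritize latency
        return 8
    if predicted_remaining_tokens > 2048:   # medium remainder
        return 4
    return 1                                # short: prioritize throughput

for tokens in (512, 4096, 20000):
    print(tokens, "->", f"TP={choose_tp_degree(tokens)}")
```

In a real system the decision would also weigh reconfiguration cost and current cluster load; the point of the sketch is only the latency/throughput trade-off the abstract describes.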