🤖 AI Summary
This work tackles two challenges in training agents for long-horizon tasks: the scarcity of high-quality trajectories and the difficulty of balancing exploration quality against sample efficiency. To this end, the authors propose Spark, a framework that integrates reinforcement learning with large language models. Spark introduces a policy-aware critical-state detection mechanism that dynamically branches at pivotal decision points, enabling adaptive and precise exploration without relying on human priors and markedly improving the sampling efficiency of high-value trajectories. Experiments show that Spark achieves substantially higher task success rates with fewer samples on embodied planning tasks and generalizes well to unseen scenarios.
📝 Abstract
Reinforcement learning has empowered large language models to act as intelligent agents, yet training them for long-horizon tasks remains challenging due to the scarcity of high-quality trajectories, especially under limited resources. Existing methods typically scale up rollout sizes and allocate computation indiscriminately across intermediate steps, wasting a substantial portion of the budget on trivial steps while failing to guarantee sample quality. To address this, we propose Spark (Strategic Policy-Aware exploRation via Key-state dynamic branching), a novel framework that selectively branches at critical decision states for resource-efficient exploration. Our key insight is to activate adaptive branching exploration at critical decision points to probe promising trajectories, thereby achieving precise resource allocation that prioritizes sampling quality over blind coverage. This design leverages the agent's intrinsic decision-making signals to reduce dependence on human priors, enabling the agent to expand exploration autonomously and generalize more strongly. Experiments across diverse tasks (e.g., embodied planning) demonstrate that Spark achieves superior success rates with significantly fewer training samples, exhibiting robust generalization even in unseen scenarios.
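To make the branching idea concrete, here is a minimal sketch of selective branching at uncertain states. The abstract does not specify which intrinsic decision-making signal Spark uses, so this sketch assumes policy entropy as a stand-in criticality score; the function names, the entropy threshold, and the branching budget are all illustrative assumptions, not the paper's actual implementation.

```python
import math

def policy_entropy(probs):
    # Shannon entropy (in nats) of the policy's action distribution.
    # High entropy marks a state where the policy is uncertain -- our
    # assumed proxy for a "critical decision state".
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_branch_states(trajectory, entropy_threshold, budget):
    # trajectory: list of (state, action_probs) pairs from one rollout.
    # Instead of spreading extra rollouts uniformly over all steps,
    # branch only at the most uncertain states, up to `budget` branches.
    scored = [(policy_entropy(p), i) for i, (_, p) in enumerate(trajectory)]
    critical = sorted(
        ((s, i) for s, i in scored if s >= entropy_threshold),
        reverse=True,
    )
    return [i for _, i in critical[:budget]]

# Toy rollout: the policy is near-deterministic at steps 0 and 3,
# but genuinely uncertain at steps 1 and 2.
traj = [
    ("s0", [0.9, 0.1]),
    ("s1", [0.5, 0.5]),
    ("s2", [0.34, 0.33, 0.33]),
    ("s3", [1.0]),
]
print(select_branch_states(traj, entropy_threshold=0.6, budget=2))  # → [2, 1]
```

The returned indices are where a branching scheme of this kind would spawn additional rollouts, leaving the near-deterministic steps unexplored and concentrating the sampling budget on pivotal decisions.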