🤖 AI Summary
Constructing large-scale, executable, and verifiable agent trajectories in terminal environments is hindered by environmental heterogeneity and the lack of standardized validation protocols. This work proposes TerminalTraj, a pipeline that automatically constructs Dockerized execution environments from high-quality code repositories, aligns task instances, and synthesizes agent trajectories accompanied by executable verification scripts. To our knowledge, this is the first approach capable of generating large-scale, executable, and verifiable terminal trajectories autonomously. The project has produced 32K Docker images and 50,733 validated trajectories spanning eight domains. Evaluation on TerminalBench demonstrates that Qwen2.5-Coder achieves performance gains of up to 20%, while TerminalTraj-32B sets a new state-of-the-art among models with fewer than 100 billion parameters.
📝 Abstract
Training agentic models for terminal-based tasks critically depends on high-quality terminal trajectories that capture realistic long-horizon interactions across diverse domains. However, constructing such data at scale remains challenging due to two key requirements: ***Executability***, since each instance requires a suitable and often distinct Docker environment; and ***Verifiability***, because heterogeneous task outputs preclude unified, standardized verification. To address these challenges, we propose **TerminalTraj**, a scalable pipeline that (i) filters high-quality repositories to construct Dockerized execution environments, (ii) generates Docker-aligned task instances, and (iii) synthesizes agent trajectories with executable validation code. Using TerminalTraj, we curate 32K Docker images and generate 50,733 verified terminal trajectories across eight domains. Models trained on this data with the Qwen2.5-Coder backbone achieve consistent performance improvements on TerminalBench (TB), with gains of up to 20% on TB 1.0 and 10% on TB 2.0 over their respective backbones. Notably, **TerminalTraj-32B** achieves strong performance among models with fewer than 100B parameters, reaching 35.30% on TB 1.0 and 22.00% on TB 2.0, and demonstrates improved test-time scaling behavior. All code and data are available at https://github.com/Wusiwei0410/TerminalTraj.
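The three-stage pipeline described in the abstract (filter repositories → generate Docker-aligned tasks → synthesize and verify trajectories) can be illustrated with a minimal sketch. All function names, data structures, and thresholds below are hypothetical stand-ins for exposition, not the authors' actual implementation; in particular, the star-count filter and the verification predicate are placeholders for the paper's real quality and executability checks.

```python
# Hypothetical sketch of a TerminalTraj-style data pipeline; every name here
# is an illustrative assumption, not the paper's actual code.
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    repo: str
    task: str
    steps: list = field(default_factory=list)
    verified: bool = False

def filter_repos(repos, min_stars=100):
    """Stage (i): keep only high-quality repositories (star count as a proxy)."""
    return [r for r in repos if r["stars"] >= min_stars]

def generate_task(repo):
    """Stage (ii): derive a task instance aligned with the repo's Docker env."""
    return f"Run the test suite of {repo['name']} inside its container"

def verify(trajectory):
    """Stage (iii): executable verification; here a trivial stand-in predicate."""
    return len(trajectory.steps) > 0

def build_dataset(repos):
    dataset = []
    for repo in filter_repos(repos):
        traj = Trajectory(
            repo=repo["name"],
            task=generate_task(repo),
            # In the real pipeline these would be agent-issued shell commands.
            steps=["docker build .", "pytest"],
        )
        traj.verified = verify(traj)
        if traj.verified:  # only verified trajectories enter the dataset
            dataset.append(traj)
    return dataset

repos = [{"name": "alpha", "stars": 500}, {"name": "beta", "stars": 3}]
data = build_dataset(repos)
print(len(data))  # only "alpha" survives the quality filter
```

The key design point the abstract emphasizes is the final gate: a trajectory is retained only if its accompanying verification code actually executes and passes, which is what makes the resulting 50,733 trajectories verifiable rather than merely plausible.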