🤖 AI Summary
This work addresses the limitations of open-source large language models on complex terminal tasks, which stem from the absence of high-fidelity executable environments and of robust trajectory data containing error-recovery behaviors. To overcome this, we propose TermiGen, the first framework that integrates multi-agent iterative generation of Dockerized terminal environments with an active error-injection mechanism. Through a Generator-Critic protocol, TermiGen synthesizes expert trajectories rich in corrective actions. Fine-tuning on this data yields TermiGen-Qwen2.5-Coder-32B, which achieves a 31.3% pass rate on TerminalBench, establishing a new state of the art among open-source models and surpassing even closed-source counterparts such as o4-mini, and demonstrating significantly enhanced robustness and recovery capabilities in real-world terminal scenarios.
📝 Abstract
Executing complex terminal tasks remains a significant challenge for open-weight LLMs, constrained by two fundamental limitations. First, high-fidelity, executable training environments are scarce: environments synthesized from real-world repositories lack diversity and scalability, while trajectories synthesized by LLMs suffer from hallucinations. Second, standard instruction tuning uses expert trajectories that rarely exhibit the simple mistakes common to smaller models. This creates a distributional mismatch, leaving student models ill-equipped to recover from their own runtime failures. To bridge these gaps, we introduce TermiGen, an end-to-end pipeline for synthesizing verifiable environments and resilient expert trajectories. TermiGen first generates functionally valid tasks and Docker containers via an iterative multi-agent refinement loop. Subsequently, we employ a Generator-Critic protocol that actively injects errors during trajectory collection, synthesizing data rich in error-correction cycles. Fine-tuned on this TermiGen-generated dataset, our TermiGen-Qwen2.5-Coder-32B achieves a 31.3% pass rate on TerminalBench. This establishes a new open-weights state of the art, outperforming existing baselines and notably surpassing capable proprietary models such as o4-mini. The dataset is available at https://github.com/ucsb-mlsec/terminal-bench-env.