AI Summary
This work addresses the limited robustness of single-shot trajectory generation under environmental variation by proposing a test-time-scalable iterative optimization framework that reframes in-context imitation learning as the dynamic refinement of trajectories in response to available computational resources. The method introduces a test-time compute-scaling mechanism that combines Monte Carlo Tree Search, vision-language-model-based scoring, automatic archiving of successful trajectories, and step-level alignment feedback to enable efficient trajectory optimization. Evaluated on six simulated manipulation tasks and in real-world robotic experiments, the approach shows substantial gains in generalization and robustness, with task success rates that increase with test-time computation and reach up to 95% in complex scenarios.
Abstract
In-context imitation learning allows robots to acquire skills from demonstrations, yet one-shot trajectory generation remains fragile under environmental variation. We propose SAIL, a framework that reframes robot imitation as an iterative refinement problem that scales with test-time compute. SAIL performs Monte Carlo Tree Search in which each node is a complete trajectory and each edge is a trajectory refinement. The search is guided by three core components: an automated archive of successful trajectories for contextually relevant retrieval, a vision-language-model-based scoring mechanism for trajectory evaluation, and a step-level feedback mechanism that provides trajectory-aligned scores for iterative refinement. Experiments on six diverse manipulation tasks in simulation, together with real-world validation, demonstrate that increasing test-time compute consistently improves success rates, reaching up to 95% on complex tasks. Our results suggest that trajectory-level test-time scaling is a robust path toward more generalizable robotic agents.
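The search structure described above (nodes are complete trajectories, edges are refinements, and a scorer guides selection) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `score` function is a toy stand-in for the vision-language-model-based scorer, and `refine` is a random perturbation standing in for model-generated trajectory refinements; the names `Node`, `mcts`, `TARGET`, and the UCB constant are all assumptions for illustration.

```python
import math
import random

random.seed(0)

# Toy stand-in for the VLM-based scorer: a trajectory is a list of
# waypoints (floats here), scored by proximity to a target sequence.
TARGET = [0.5, 0.5, 0.5]

def score(traj):
    return -sum((a - b) ** 2 for a, b in zip(traj, TARGET))

def refine(traj):
    # Edge = trajectory refinement: perturb one step (placeholder for
    # the model-generated, feedback-guided refinement in the paper).
    i = random.randrange(len(traj))
    new = list(traj)
    new[i] += random.gauss(0, 0.2)
    return new

class Node:
    def __init__(self, traj, parent=None):
        self.traj, self.parent = traj, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb(self, c=1.4):
        # Standard UCB1: unvisited children are explored first.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def mcts(root_traj, iters=200, width=4):
    """Trajectory-level MCTS: more iterations = more test-time compute."""
    root = Node(root_traj)
    best = (score(root_traj), root_traj)
    for _ in range(iters):
        # Selection: descend by UCB until a node can still be expanded.
        node = root
        while len(node.children) >= width:
            node = max(node.children, key=Node.ucb)
        # Expansion: add one refinement edge.
        child = Node(refine(node.traj), parent=node)
        node.children.append(child)
        # Evaluation (scorer stand-in) and backpropagation to the root.
        s = score(child.traj)
        best = max(best, (s, child.traj))
        while child:
            child.visits += 1
            child.value += s
            child = child.parent
    return best

if __name__ == "__main__":
    best_score, best_traj = mcts([0.0, 0.0, 0.0])
    print(best_score, best_traj)
```

Because every node is a full trajectory rather than a partial plan, any node visited during the search is directly executable, and the best-scoring trajectory found so far only improves as the iteration budget grows, which is the sense in which success rate can scale with test-time compute.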