SAIL: Test-Time Scaling for In-Context Imitation Learning with VLM

πŸ“… 2026-03-09
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the fragility of single-shot trajectory generation under environmental variation by proposing a test-time-scalable iterative optimization framework that reframes in-context imitation learning as the progressive refinement of trajectories in proportion to available compute. The method introduces, for the first time, a test-time compute-scaling mechanism that combines Monte Carlo Tree Search, vision-language model-based scoring, automatic archiving of successful trajectories, and step-level alignment feedback to enable efficient trajectory optimization. Evaluated on six simulated manipulation tasks and in real-world robotic experiments, the approach shows substantial gains in generalization and robustness, with task success rates that increase with test-time computation and reach up to 95% on complex scenarios.

πŸ“ Abstract
In-context imitation learning allows robots to acquire skills from demonstrations, yet one-shot trajectory generation remains fragile under environmental variation. We propose SAIL, a framework that reframes robot imitation as an iterative refinement problem capable of scaling with test-time compute. SAIL uses Monte Carlo Tree Search, where each node is a complete trajectory and edges correspond to trajectory refinements. The search is guided by three core components: an automated archive of successful trajectories for contextually relevant retrieval, a vision-language-model-based scoring mechanism for trajectory evaluation, and step-level feedback that provides trajectory-aligned scores for iterative refinement. Experiments across six diverse manipulation tasks in simulation, together with real-world validation, demonstrate that increasing test-time compute consistently improves success rates, achieving up to 95% on complex tasks. Our results suggest that trajectory-level test-time scaling is a robust path toward more generalizable robotic agents.
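The core idea of searching over whole trajectories rather than individual actions can be illustrated with a minimal sketch. This is not the paper's implementation: `mock_score` is a stand-in for the VLM-based scorer, `refine` is a toy refinement operator, and trajectories are simplified to lists of scalar waypoints.

```python
import math
import random

random.seed(0)

def mock_score(traj):
    # Stand-in for VLM scoring: here, higher is better when
    # waypoints lie near a hypothetical target value of 1.0.
    return -sum((w - 1.0) ** 2 for w in traj)

def refine(traj):
    # One refinement edge in the tree: perturb a random waypoint.
    new = list(traj)
    i = random.randrange(len(new))
    new[i] += random.uniform(-0.5, 0.5)
    return new

class Node:
    """A tree node holding a complete trajectory, as in the abstract."""
    def __init__(self, traj, parent=None):
        self.traj, self.parent = traj, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb(self, c=1.4):
        # Upper Confidence Bound: balance exploitation and exploration.
        if self.visits == 0:
            return float("inf")
        return self.value / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)

def mcts_refine(init_traj, budget=200, width=3):
    """Spend `budget` units of test-time compute refining a trajectory."""
    root = Node(init_traj)
    best = (mock_score(init_traj), init_traj)
    for _ in range(budget):
        # Selection: descend by UCB until a node with room to expand.
        node = root
        while node.children and len(node.children) >= width:
            node = max(node.children, key=Node.ucb)
        # Expansion: add one refined trajectory as a child.
        child = Node(refine(node.traj), parent=node)
        node.children.append(child)
        # Evaluation: score the new trajectory.
        score = mock_score(child.traj)
        if score > best[0]:
            best = (score, child.traj)
        # Backpropagation: update statistics up to the root.
        while child is not None:
            child.visits += 1
            child.value += score
            child = child.parent
    return best

score, traj = mcts_refine([0.0, 0.0, 0.0])
```

Raising `budget` corresponds to spending more test-time compute, which is exactly the axis along which the paper reports improving success rates.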
Problem

Research questions and friction points this paper is trying to address.

in-context imitation learning
trajectory generation
environmental variation
test-time scaling
robotic generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

test-time scaling
in-context imitation learning
Monte Carlo Tree Search
vision language model
trajectory refinement