Self-Adapting Improvement Loops for Robotic Learning

📅 2025-06-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address robots’ poor generalization to unseen tasks, this paper proposes the Self-Adapting Improvement Loop (SAIL): an internet-scale pre-trained video model guides a real-world robot in autonomously collecting task trajectories, which are then used to iteratively refine a lightweight, domain-specific video planner online. SAIL establishes a closed-loop, self-improving video planner on physical robots—requiring no human-annotated data for novel tasks and instead relying solely on self-generated experience and cross-domain knowledge transfer to continuously improve generalization. In experiments on MetaWorld multitask benchmarks and real robot-arm manipulation, SAIL achieves consistent multi-round performance gains on tasks initially unseen during training, and proves robust both to the choice of experience-filtering strategy and to the quality of the initial demonstrations. The core contribution is an end-to-end, deployable, video-driven framework that enables autonomous robot self-improvement without task-specific supervision.

📝 Abstract
Video generative models trained on expert demonstrations have been utilized as performant text-conditioned visual planners for solving robotic tasks. However, generalization to unseen tasks remains a challenge. Whereas improved generalization may be facilitated by leveraging learned prior knowledge from additional pre-collected offline data sources, such as web-scale video datasets, in the era of experience we aim to design agents that can continuously improve in an online manner from self-collected behaviors. In this work we thus propose the Self-Adapting Improvement Loop (SAIL), where an in-domain video model iteratively updates itself on self-produced trajectories, collected through adaptation with an internet-scale pretrained video model, and steadily improves its performance for a specified task of interest. We apply SAIL to a diverse suite of MetaWorld tasks, as well as two manipulation tasks on a real robot arm, and find that performance improvements continuously emerge over multiple iterations for novel tasks initially unseen during original in-domain video model training. Furthermore, we discover that SAIL is surprisingly robust to whether and how the self-collected experience is filtered, and to the quality of the initial in-domain demonstrations. Through adaptation with summarized internet-scale data, and learning through online experience, we thus demonstrate a way to iteratively bootstrap a high-performance video model for solving novel robotic tasks through self-improvement.
Problem

Research questions and friction points this paper is trying to address.

Improving generalization of robotic tasks to unseen scenarios
Enabling continuous online self-improvement for robotic agents
Leveraging internet-scale data for iterative video model adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Adapting Improvement Loop (SAIL) for robotics
Iterative self-updating with internet-scale video model
Online learning from self-collected robotic behaviors
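The iterative loop described above can be sketched in code. This is a minimal toy illustration, not the paper's implementation: all function and class names (`pretrained_guidance`, `InDomainPlanner`, `rollout`, `sail`) are hypothetical stand-ins, and the "models" below are placeholders for a large pretrained video model and a lightweight in-domain video planner.

```python
# Hypothetical sketch of a SAIL-style self-improvement loop:
# generate plans with pretrained guidance, roll them out on the robot,
# optionally filter the resulting experience, and update the in-domain planner.
import random


def pretrained_guidance(task):
    """Stand-in for a plan prior from an internet-scale pretrained video model."""
    return [f"{task}-frame-{i}" for i in range(3)]


class InDomainPlanner:
    """Toy in-domain video planner whose quality grows with collected data."""

    def __init__(self):
        self.dataset = []

    def plan(self, task, prior):
        # Adaptation step: in practice the in-domain model is combined
        # with the pretrained prior; here we just pass the prior through.
        return prior

    def success_prob(self):
        # Proxy for planner quality: improves with self-collected experience.
        return min(1.0, 0.2 + 0.1 * len(self.dataset))

    def update(self, trajectories):
        # Fine-tune on (optionally filtered) self-produced trajectories.
        self.dataset.extend(trajectories)


def rollout(plan, success_prob, rng):
    """Execute the plan on a simulated robot; return trajectory and outcome."""
    return {"plan": plan, "success": rng.random() < success_prob}


def sail(task, iterations=5, rollouts_per_iter=10, seed=0):
    rng = random.Random(seed)
    planner = InDomainPlanner()
    success_rates = []
    for _ in range(iterations):
        prior = pretrained_guidance(task)
        trajs = [
            rollout(planner.plan(task, prior), planner.success_prob(), rng)
            for _ in range(rollouts_per_iter)
        ]
        # Experience filtering (the paper reports robustness to this choice);
        # here we keep only successful rollouts.
        planner.update([t for t in trajs if t["success"]])
        success_rates.append(sum(t["success"] for t in trajs) / len(trajs))
    return success_rates


print(sail("pick-place"))
```

The key structural point the sketch captures is that no human-annotated data enters the loop after initialization: each round's training data comes entirely from the planner's own rollouts, guided by the frozen pretrained prior.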