Self-Improving Embodied Foundation Models

📅 2025-09-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of embodied foundation models in low-level robotic control, specifically their reliance on behavior cloning and the resulting poor generalization. The authors propose a two-stage post-training framework: (1) a Supervised Fine-Tuning stage that augments behavioral cloning with a steps-to-go prediction objective, from which transferable reward functions and success detectors are derived; and (2) an online Self-Improvement stage in which robots autonomously practice and refine skills via reinforcement learning with minimal supervision. The key contribution is using steps-to-go prediction for reward modeling, which enables a fleet of robots to autonomously explore, practice, and acquire novel skills beyond the imitation data distribution without human annotations. Experiments on real-world and simulated robotic platforms show significant gains in sample efficiency and task success rates, demonstrating cross-task skill generalization and overcoming fundamental constraints of the prevailing behavior cloning paradigm.

📝 Abstract
Foundation models trained on web-scale data have revolutionized robotics, but their application to low-level control remains largely limited to behavioral cloning. Drawing inspiration from the success of the reinforcement learning stage in fine-tuning large language models, we propose a two-stage post-training approach for robotics. The first stage, Supervised Fine-Tuning (SFT), fine-tunes pretrained foundation models using both: a) behavioral cloning, and b) steps-to-go prediction objectives. In the second stage, Self-Improvement, steps-to-go prediction enables the extraction of a well-shaped reward function and a robust success detector, enabling a fleet of robots to autonomously practice downstream tasks with minimal human supervision. Through extensive experiments on real-world and simulated robot embodiments, our novel post-training recipe unveils significant results on Embodied Foundation Models. First, we demonstrate that the combination of SFT and Self-Improvement is significantly more sample-efficient than scaling imitation data collection for supervised learning, and that it leads to policies with significantly higher success rates. Further ablations highlight that the combination of web-scale pretraining and Self-Improvement is the key to this sample-efficiency. Next, we demonstrate that our proposed combination uniquely unlocks a capability that current methods cannot achieve: autonomously practicing and acquiring novel skills that generalize far beyond the behaviors observed in the imitation learning datasets used during training. These findings highlight the transformative potential of combining pretrained foundation models with online Self-Improvement to enable autonomous skill acquisition in robotics. Our project website can be found at https://self-improving-efms.github.io .
Problem

Research questions and friction points this paper is trying to address.

Improving low-level robot control beyond behavioral cloning
Enabling autonomous skill acquisition with minimal human supervision
Combining web-scale pretraining with self-improvement for robotics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage fine-tuning with SFT and self-improvement
Steps-to-go prediction for reward shaping
Autonomous practice with minimal human supervision
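The page does not spell out how a steps-to-go predictor becomes a reward signal. A minimal sketch of one plausible reading, assuming the shaped reward is the per-step decrease in predicted steps-to-go and success is detected by thresholding the prediction (function names, the threshold, and the toy trajectory are all hypothetical, not from the paper):

```python
import numpy as np

def shaped_reward(steps_to_go: np.ndarray, t: int) -> float:
    """Reward at step t: the decrease in predicted steps-to-go.

    Progress toward the goal yields positive reward; moving away
    from it yields negative reward, giving a dense, well-shaped signal.
    """
    return float(steps_to_go[t] - steps_to_go[t + 1])

def success_detected(steps_to_go_t: float, threshold: float = 1.0) -> bool:
    """Declare success once predicted steps-to-go falls below a threshold."""
    return steps_to_go_t < threshold

# Toy trajectory: predictor output shrinking from 5 steps-to-go toward 0.
preds = np.array([5.0, 4.0, 2.5, 1.2, 0.3])
rewards = [shaped_reward(preds, t) for t in range(len(preds) - 1)]
```

Under this reading, the rewards along a trajectory telescope to the total drop in predicted steps-to-go, so a single learned predictor supplies both the dense reward and the success detector that the Self-Improvement stage needs.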
Seyed Kamyar Seyed Ghasemipour
Founding Member of Technical Staff at Generalist. Project completed April 2024 at Google DeepMind
Ayzaan Wahid
Google
Jonathan Tompson
Meta Reality Labs
Pannag Sanketi
Google
Igor Mordatch
Google