🤖 AI Summary
Current vision-language models (VLMs) face two key bottlenecks in robotic procedural planning: reliance on expensive large-scale models or constrained simulation environments, and difficulty generating executable, perceptually grounded low-level action sequences. This paper introduces SelfReVision, a lightweight, scalable framework that enables small-scale VLMs (as small as 3B parameters) to critique, revise, and verify their own plans without external supervision or teacher models. Drawing on chain-of-thought prompting and the self-instruct paradigm, it runs a self-distillation loop in which the model autonomously generates, evaluates, and refines action sequences; the resulting higher-quality plans can be used both at inference time and for continued fine-tuning. Evaluated on VLMs ranging from 3B to 72B parameters, SelfReVision not only substantially improves weak base models but also outperforms models 100× their size, yielding better control and generalization in downstream embodied tasks.
📝 Abstract
Large language models (LLMs) have shown promise in robotic procedural planning, yet their human-centric reasoning often omits the low-level, grounded details needed for robotic execution. Vision-language models (VLMs) offer a path toward more perceptually grounded plans, but current methods either rely on expensive, large-scale models or are constrained to narrow simulation settings. We introduce SelfReVision, a lightweight and scalable self-improvement framework for vision-language procedural planning. SelfReVision enables small VLMs to iteratively critique, revise, and verify their own plans, without external supervision or teacher models, drawing inspiration from chain-of-thought prompting and self-instruct paradigms. Through this self-distillation loop, models generate higher-quality, execution-ready plans that can be used both at inference and for continued fine-tuning. Using models varying from 3B to 72B, our results show that SelfReVision not only boosts performance over weak base VLMs but also outperforms models 100X the size, yielding improved control in downstream embodied tasks.
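The critique-revise-verify loop described above can be sketched as a simple control flow. This is a minimal illustration only, not the paper's implementation: the functions `vlm_generate`, `vlm_critique`, and `vlm_revise` are hypothetical stand-ins for prompted calls to the same small VLM, and the toy plan contents are invented for demonstration.

```python
# Hedged sketch of a self-critique/revise/verify loop in the spirit of
# SelfReVision. All VLM calls are stubbed; in the real system each stub
# would be a prompt to the same small vision-language model.

def vlm_generate(goal):
    """Stub: produce an initial high-level plan for the goal."""
    return [f"move to {goal}", f"grasp {goal}"]

def vlm_critique(plan):
    """Stub: return criticisms of the plan (empty list = plan verified).

    A real critique step would ask the VLM to flag missing grounded
    details such as object locations, preconditions, or checks.
    """
    return [] if "verify grip" in plan else ["missing grip verification"]

def vlm_revise(plan, critiques):
    """Stub: revise the plan to address the criticisms."""
    return plan + ["verify grip"]

def self_revision(goal, max_rounds=5):
    """Iteratively critique and revise until verification passes."""
    plan = vlm_generate(goal)
    for _ in range(max_rounds):
        critiques = vlm_critique(plan)
        if not critiques:          # verify: no remaining criticisms
            break
        plan = vlm_revise(plan, critiques)
    return plan

final_plan = self_revision("cup")
```

The key property the paper exploits is that the loop needs no teacher model: generation, critique, and revision are all roles played by the same small VLM, and the refined plans can be reused as fine-tuning data.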