🤖 AI Summary
To address the scarcity of real-world interaction data and the high cost of manually designing simulation tasks in general-purpose robot learning, this paper introduces AnyTask, an automated task and dataset generation framework. The framework pairs GPU-accelerated parallel physics simulation with multimodal foundation models (VLMs and LLMs) to automate task design, scene generation, and expert demonstration synthesis end to end. The authors propose a family of agents (ViPR, ViPR-Eureka, and ViPR-RL) that incorporate VLM-in-the-loop planning, LLM-guided contact sampling, and hybrid policy learning under sparse rewards, substantially reducing the human effort required by conventional sim-to-real pipelines. Critically, behavior cloning policies trained on the generated data deploy directly to physical robots without fine-tuning, achieving an average success rate of 44% across pick-and-place, drawer-opening, contact-rich pushing, and long-horizon manipulation tasks.
📝 Abstract
Generalist robot learning remains constrained by data: large-scale, diverse, high-quality interaction data are expensive to collect in the real world. While simulation is a promising way to scale up data collection, the surrounding steps, including simulation task design, task-aware scene generation, expert demonstration synthesis, and sim-to-real transfer, still demand substantial human effort. We present AnyTask, an automated framework that pairs massively parallel GPU simulation with foundation models to design diverse manipulation tasks and synthesize robot data. We introduce three AnyTask agents for generating expert demonstrations, aiming to solve as many tasks as possible: 1) ViPR, a novel task and motion planning agent with VLM-in-the-loop Parallel Refinement; 2) ViPR-Eureka, a reinforcement learning agent with generated dense rewards and LLM-guided contact sampling; and 3) ViPR-RL, a hybrid planning and learning approach that jointly produces high-quality demonstrations using only sparse rewards. We train behavior cloning policies on the generated data, validate them in simulation, and deploy them directly on real robot hardware. The policies generalize to novel object poses, achieving 44% average success across a suite of real-world pick-and-place, drawer-opening, contact-rich pushing, and long-horizon manipulation tasks. Our project website is at https://anytask.rai-inst.com.