🤖 AI Summary
This study addresses the lack of reproducible, statistically rigorous methods for evaluating the real-world performance of multitask robotic manipulation policies, termed Large Behavior Models (LBMs), and proposes a principled evaluation framework tailored to dexterous manipulation. Building on Diffusion Policy, the authors combine simulation and real-robot data in a quantitative evaluation pipeline spanning imitation learning, multitask pretraining, and blind randomized controlled trials. A key contribution is the systematic identification of predictable scaling behavior: pretraining scale and data diversity jointly improve task generalization, robustness, and cold-start efficiency on novel tasks. Experiments show that multitask LBMs outperform single-task baselines in real-world success rate and resilience to disturbances, and adapt rapidly to complex unseen tasks from only a few demonstrations, empirically supporting their practical deployability and scalability.
📝 Abstract
Robot manipulation has seen tremendous progress in recent years, with imitation learning policies enabling successful performance of dexterous and hard-to-model tasks. Concurrently, scaling data and model size has led to the development of capable language and vision foundation models, motivating large-scale efforts to create general-purpose robot foundation models. While these models have garnered significant enthusiasm and investment, meaningful evaluation of real-world performance remains a challenge, both slowing the pace of development and inhibiting a nuanced understanding of current capabilities. In this paper, we rigorously evaluate multitask robot manipulation policies, referred to as Large Behavior Models (LBMs), by extending the Diffusion Policy paradigm across a corpus of simulated and real-world robot data. We propose and validate an evaluation pipeline to rigorously analyze the capabilities of these models with statistical confidence. We compare against single-task baselines through blind, randomized trials in a controlled setting, using both simulation and real-world experiments. We find that multitask pretraining makes the policies more successful and robust, and enables teaching complex new tasks more quickly, using a fraction of the data required by single-task baselines. Moreover, performance predictably increases as pretraining scale and diversity grow. Project page: https://toyotaresearchinstitute.github.io/lbm1/
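The abstract's claim of analyzing policy capabilities "with statistical confidence" from blind, randomized rollouts can be illustrated with a minimal sketch. This is not the paper's actual pipeline, and all rollout counts below are hypothetical; it simply shows one standard way to attach confidence intervals to per-policy success rates, using the Wilson score interval for a binomial proportion.

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Wilson score confidence interval for a binomial success rate.

    z = 1.96 corresponds to a ~95% two-sided interval.
    """
    if trials <= 0:
        raise ValueError("trials must be positive")
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - margin, center + margin

# Hypothetical rollout tallies for a multitask policy vs. a single-task baseline.
mt_lo, mt_hi = wilson_interval(successes=42, trials=50)
st_lo, st_hi = wilson_interval(successes=30, trials=50)
print(f"multitask   95% CI: ({mt_lo:.3f}, {mt_hi:.3f})")
print(f"single-task 95% CI: ({st_lo:.3f}, {st_hi:.3f})")
```

If the two intervals do not overlap, the success-rate difference is unlikely to be sampling noise; overlapping intervals call for more rollouts or a direct two-sample test before drawing conclusions.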