A Careful Examination of Large Behavior Models for Multitask Dexterous Manipulation

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of reproducible, statistically rigorous methods for evaluating the real-world performance of multitask robotic manipulation policies, termed Large Behavior Models (LBMs). The authors propose a principled evaluation framework tailored to dexterous manipulation. Methodologically, building on Diffusion Policy, they combine simulation and real-robot data in a quantitative evaluation pipeline spanning imitation learning, multitask pretraining, and blind randomized controlled trials. A key contribution is the systematic identification of predictable scaling behavior: pretraining scale and data diversity jointly improve task generalization, robustness, and sample efficiency on novel tasks. Experiments demonstrate that multitask LBMs outperform single-task baselines in real-world success rate and resilience to disturbances, and adapt to complex unseen tasks from only a few demonstrations, supporting their practical deployability and scalability.

📝 Abstract
Robot manipulation has seen tremendous progress in recent years, with imitation learning policies enabling successful performance of dexterous and hard-to-model tasks. Concurrently, scaling data and model size has led to the development of capable language and vision foundation models, motivating large-scale efforts to create general-purpose robot foundation models. While these models have garnered significant enthusiasm and investment, meaningful evaluation of real-world performance remains a challenge, limiting the pace of development and inhibiting a nuanced understanding of current capabilities. In this paper, we rigorously evaluate multitask robot manipulation policies, referred to as Large Behavior Models (LBMs), by extending the Diffusion Policy paradigm across a corpus of simulated and real-world robot data. We propose and validate an evaluation pipeline to rigorously analyze the capabilities of these models with statistical confidence. We compare against single-task baselines through blind, randomized trials in a controlled setting, using both simulation and real-world experiments. We find that multi-task pretraining makes the policies more successful and robust, and enables teaching complex new tasks more quickly, using a fraction of the data when compared to single-task baselines. Moreover, performance predictably increases as pretraining scale and diversity grow. Project page: https://toyotaresearchinstitute.github.io/lbm1/
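The abstract emphasizes analyzing policy capabilities "with statistical confidence" through blind, randomized trials. As a rough illustration only (not the paper's actual analysis), per-policy success rates over a fixed number of rollouts can be compared with binomial confidence intervals; the sketch below uses the standard Wilson score interval. All rollout counts and the function name are hypothetical.

```python
import math

def wilson_ci(successes, trials, z=1.96):
    """Wilson score confidence interval (default ~95%) for a binomial success rate."""
    if trials == 0:
        return (0.0, 1.0)  # no data: the rate is unconstrained
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (max(0.0, center - half), min(1.0, center + half))

# Hypothetical counts: 50 evaluation rollouts per policy.
multitask_ci = wilson_ci(45, 50)    # multitask-pretrained policy
single_task_ci = wilson_ci(25, 50)  # single-task baseline
# Non-overlapping intervals are evidence of a genuine performance gap.
print(multitask_ci, single_task_ci)
```

With small trial counts (as is typical for real-robot evaluation), the Wilson interval is a common choice because it behaves better than the naive normal approximation near success rates of 0 or 1.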
Problem

Research questions and friction points this paper is trying to address.

Evaluate multitask robot manipulation models rigorously
Compare multi-task vs single-task policies statistically
Assess impact of pretraining scale on performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends Diffusion Policy for multitask robot manipulation
Proposes evaluation pipeline with statistical confidence
Shows pretraining scale boosts performance predictably
TRI LBM Team
Toyota Research Institute
Jose Barreiros
Scientist, Toyota Research Institute
Physical AI, Whole-body Manipulation, Haptic Intelligence
Andrew Beaulieu
Toyota Research Institute
Aditya Bhat
Toyota Research Institute
Rick Cory
Toyota Research Institute
Eric Cousineau
Toyota Research Institute
Robotics, Control, Policy Search, Visuomotor Policies
Hongkai Dai
Toyota Research Institute
Robotics
Ching-Hsin Fang
Toyota Research Institute
Kunimatsu Hashimoto
Toyota Research Institute
Muhammad Zubair Irshad
Toyota Research Institute | Georgia Institute of Technology
3D Vision, Robot Learning, Foundation Models, Generative AI, Multimodal AI
Masha Itkina
Toyota Research Institute (TRI)
artificial intelligence, perception, autonomous vehicles
Naveen Kuppuswamy
Toyota Research Institute, USA.
Robot Manipulation, tactile perception, large behavior models
Kuan-Hui Lee
Toyota Research Institute
Katherine Liu
Research Scientist @ Toyota Research Institute
Robotics, Computer Vision, Machine Learning, Neural Fields, Manipulation
Dale McConachie
Toyota Research Institute
Ian McMahon
Toyota Research Institute
Haruki Nishimura
Toyota Research Institute
robotics, machine learning, planning under uncertainty, statistics, probabilistic inference
Calder Phillips-Grafflin
Toyota Research Institute
Charles Richter
Toyota Research Institute
Paarth Shah
Toyota Research Institute
Krishnan Srinivasan
PhD Student, Stanford University
Learning, Robotics, Computational Biology
Blake Wulfe
Toyota Research Institute
Chen Xu
Toyota Research Institute
Mengchao Zhang
Toyota Research Institute
Alex Alspach
Toyota Research Institute