🤖 AI Summary
Behavior cloning (BC) suffers from poor generalization to unseen tasks and heavy reliance on abundant task-labeled demonstrations. To address this, we propose the “blindfolded expert” paradigm: experts deliberately ignore high-level task identifiers during demonstration, providing only low-level observations (e.g., pixels or sensor readings). Theoretical analysis shows that this information constraint tightens the upper bound on generalization error and enhances cross-task transferability. Experiments on real-robot peg-insertion tasks and the Procgen video game benchmark demonstrate that models trained on fewer, information-restricted demonstrations achieve significantly improved zero-shot generalization to novel tasks. This work is the first to systematically reveal the detrimental impact of task-identifier redundancy on BC generalization. It establishes a new, theoretically grounded paradigm for efficient and scalable imitation learning—shifting focus from task-conditioned supervision to invariant, observation-only expert behavior.
📝 Abstract
Behavioral cloning is a simple yet effective technique for learning sequential decision-making from demonstrations. Recently, it has gained prominence as the core of foundation models for the physical world, where achieving generalization requires countless demonstrations of a multitude of tasks. Typically, a human expert with full information on the task demonstrates a (nearly) optimal behavior. In this paper, we propose to hide some of the task's information from the demonstrator. This "blindfolded" expert is compelled to employ non-trivial exploration to solve the task. We show that cloning the blindfolded expert generalizes better to unseen tasks than its fully-informed counterpart. We conduct experiments on real-world robot peg-insertion tasks with (limited) human demonstrations, alongside video games from the Procgen benchmark. Additionally, we support our findings with theoretical analysis, which confirms that the generalization error scales with $\sqrt{I/m}$, where $I$ measures the amount of task information available to the demonstrator, and $m$ is the number of demonstrated tasks. Both theory and practice indicate that cloning blindfolded experts generalizes better with fewer demonstrated tasks. Project page with videos and code: https://sites.google.com/view/blindfoldedexperts/home