Beyond Behavior Cloning: Robustness through Interactive Imitation and Contrastive Learning

📅 2025-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Behavior cloning (BC) is prone to overfitting and failure under noisy demonstrations, particularly in implicit BC approaches that use expressive energy-based models. To address this, the authors propose CLIC, a framework that reformulates BC as an interactive, correction-driven iterative process of optimal action estimation. CLIC leverages binary human feedback (accepting or correcting policy outputs) to dynamically refine the estimated set of desired actions and optimize the energy function. The paper provides theoretical convergence guarantees for both single- and multiple-optimal-action settings and demonstrates support for heterogeneous non-demonstration feedback (e.g., attribute-based descriptions). Experiments show that CLIC significantly stabilizes the training of energy-based models and achieves superior robustness to demonstration noise, as well as better generalization across diverse feedback modalities, in both simulated and real-world robotic tasks.

📝 Abstract
Behavior cloning (BC) traditionally relies on demonstration data, assuming the demonstrated actions are optimal. This can lead to overfitting under noisy data, particularly when expressive models are used (e.g., the energy-based model in Implicit BC). To address this, we extend behavior cloning into an iterative process of optimal action estimation within the Interactive Imitation Learning framework. Specifically, we introduce Contrastive policy Learning from Interactive Corrections (CLIC). CLIC leverages human corrections to estimate a set of desired actions and optimizes the policy to select actions from this set. We provide theoretical guarantees for the convergence of the desired action set to optimal actions in both single and multiple optimal action cases. Extensive simulation and real-robot experiments validate CLIC's advantages over existing state-of-the-art methods, including stable training of energy-based models, robustness to feedback noise, and adaptability to diverse feedback types beyond demonstrations. Our code will be publicly available soon.
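To make the abstract's idea concrete, here is a minimal sketch of contrastive learning of an energy-based policy from interactive corrections. This is an illustrative toy, not the paper's actual method: the linear energy model `E(s, a) = -aᵀWs`, the two-candidate softmax, and the hand-derived gradient are all assumptions made for the example. The only point it demonstrates is the core mechanic the abstract describes: a human correction supplies a desired action, and a contrastive update pushes its energy below that of the action the policy proposed.

```python
import numpy as np

class EnergyPolicy:
    """Toy energy-based policy with a linear energy E(s, a) = -a^T W s.
    Illustrative stand-in only; the paper's model is far more expressive."""

    def __init__(self, state_dim, action_dim, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(action_dim, state_dim))
        self.lr = lr

    def energy(self, s, a):
        return float(-a @ self.W @ s)

    def correction_update(self, s, a_desired, a_rejected):
        """One contrastive step: given a human correction, lower the energy of
        the desired action relative to the action the policy proposed."""
        e_d, e_r = self.energy(s, a_desired), self.energy(s, a_rejected)
        # probability of the desired action under a softmax over negative energies
        p_d = 1.0 / (1.0 + np.exp(e_d - e_r))
        # gradient of the contrastive loss -log p_d w.r.t. W, for E = -a^T W s
        grad = (1.0 - p_d) * (np.outer(a_rejected, s) - np.outer(a_desired, s))
        self.W -= self.lr * grad

# usage: repeated corrections drive the desired action's energy below the rejected one
policy = EnergyPolicy(state_dim=2, action_dim=2)
s = np.array([1.0, 0.5])
a_good, a_bad = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for _ in range(50):
    policy.correction_update(s, a_good, a_bad)
assert policy.energy(s, a_good) < policy.energy(s, a_bad)
```

Acting with such a policy means choosing the lowest-energy action among candidates, which is why the update above is phrased entirely in terms of relative energies rather than regressing to a single demonstrated action.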
Problem

Research questions and friction points this paper is trying to address.

Enhance behavior cloning robustness
Optimize policy with human corrections
Ensure adaptability to diverse feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interactive Imitation Learning framework
Contrastive policy Learning from Interactive Corrections (CLIC)
Robustness to feedback noise
👥 Authors
Zhaoting Li, TU Delft, Cognitive Robotics (Robotics, Imitation Learning, Motion Planning, Human-robot interaction)
Rodrigo Pérez-Dattari, Delft University of Technology
R. Babuška, Delft University of Technology
C. D. Santina, Delft University of Technology
Jens Kober, Associate Professor, CoR, TU Delft (Robotics, Machine Learning)