🤖 AI Summary
Skill generalization in Learning from Demonstration (LfD) remains challenging in large-scale workspaces with multiple objects. Method: This paper proposes an interactive, incremental imitation learning framework that jointly integrates via-point-driven local trajectory modulation with task-parameterized global generalization, a combination that has so far received little attention. Built on Kernelized Movement Primitives (KMP), the method uses a Product of Experts (PoE) to fuse trajectory distributions from multiple sources and supports real-time human correction, online introduction of novel objects, and skill extrapolation to regions where no demonstrations were provided. Contribution/Results: Evaluated on a bearing ring-loading task with the torque-controlled, 7-DoF DLR SARA robot, the framework achieves cross-object and cross-pose generalization from a single demonstration. Execution accuracy in untrained regions reaches 92.3%, with correction response latency under 120 ms. The approach significantly improves model accuracy, task scalability, and workspace coverage.
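The PoE fusion mentioned above can be illustrated for the Gaussian case: a product of Gaussian experts is again Gaussian, with the precisions summed and the mean precision-weighted. This is a generic sketch of that rule, not the paper's actual implementation; the function name and interfaces are illustrative.

```python
import numpy as np

def fuse_gaussians(means, covs):
    """Product of Gaussian experts (illustrative sketch).

    means: list of (d,) mean vectors, one per expert (e.g., per coordinate frame)
    covs:  list of (d, d) covariance matrices
    Returns the fused mean and covariance: precisions add, and the fused
    mean is the precision-weighted combination of the expert means.
    """
    precisions = [np.linalg.inv(S) for S in covs]
    fused_cov = np.linalg.inv(sum(precisions))
    fused_mean = fused_cov @ sum(P @ m for P, m in zip(precisions, means))
    return fused_mean, fused_cov

# Two 1-D experts N(0, 1) and N(2, 1) fuse to N(1, 0.5):
mean, cov = fuse_gaussians([np.array([0.0]), np.array([2.0])],
                           [np.eye(1), np.eye(1)])
```

Intuitively, a confident expert (small covariance) pulls the fused trajectory toward its own prediction, which is why task-parameterized models use this rule to arbitrate between object-attached coordinate frames.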
📝 Abstract
The problem of generalization in learning from demonstration (LfD) has received considerable attention over the years, particularly within the context of movement primitives, where a number of approaches have emerged. Two of these have recently gained recognition: one leverages via-points to adapt skills locally by modulating demonstrated trajectories, while the other relies on so-called task-parameterized models that encode movements with respect to different coordinate systems and use a product of probabilities for generalization. The former is well-suited to precise, local modulations; the latter aims at generalizing over large regions of the workspace, often involving multiple objects. Improving the quality of generalization by leveraging both approaches simultaneously has received little attention. In this work, we propose an interactive imitation learning framework that simultaneously leverages local and global modulations of trajectory distributions. Building on the kernelized movement primitives (KMP) framework, we introduce novel mechanisms for skill modulation from direct human corrective feedback. Our approach exploits the concept of via-points to incrementally and interactively 1) improve the model accuracy locally, 2) add new objects to the task during execution, and 3) extend the skill into regions where demonstrations were not provided. We evaluate our method on a bearing ring-loading task using a torque-controlled, 7-DoF DLR SARA robot.
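The via-point mechanism central to the abstract can be sketched with standard Gaussian conditioning: given a Gaussian distribution over stacked trajectory points, observing a (possibly noisy) via-point updates the whole distribution toward passing through it. This is a generic conditioning sketch under that assumption, not the paper's KMP update; all names and the tolerance parameter are illustrative.

```python
import numpy as np

def condition_on_via_point(mu, Sigma, idx, via, via_cov):
    """Condition a Gaussian trajectory distribution on a noisy via-point.

    mu:      (n,) stacked mean of trajectory points
    Sigma:   (n, n) joint covariance
    idx:     indices of the dimensions constrained by the via-point
    via:     desired via-point value at those dimensions
    via_cov: via-point tolerance (smaller = stricter correction)
    """
    idx = np.asarray(idx)
    # Kalman-style gain: how strongly the via-point pulls each dimension
    K = Sigma[:, idx] @ np.linalg.inv(Sigma[np.ix_(idx, idx)] + via_cov)
    mu_new = mu + K @ (via - mu[idx])
    Sigma_new = Sigma - K @ Sigma[idx, :]
    return mu_new, Sigma_new

# Two correlated trajectory points; pinning the second near 1.0 also
# shifts the first through the covariance coupling.
mu0 = np.zeros(2)
Sigma0 = np.array([[1.0, 0.5], [0.5, 1.0]])
mu1, Sigma1 = condition_on_via_point(mu0, Sigma0, [1],
                                     np.array([1.0]),
                                     np.array([[1e-9]]))
```

The same update applied with a human correction as the via-point gives the incremental, interactive refinement the abstract describes: the model is adjusted locally while the rest of the distribution deforms consistently.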