🤖 AI Summary
Fine-grained teleoperation in real-world settings remains hindered by slow execution, high error rates, and low reliability, challenges that are particularly pronounced for novice users. This work proposes a "real-to-sim-to-real" shared autonomy framework that constructs a k-nearest-neighbor surrogate of human behavior from less than five minutes of real teleoperation data, trains a residual copilot policy in simulation, and deploys it back to the physical system to assist human operators. By leveraging minimal real-world data, the approach enables stable reinforcement learning and significantly improves both task efficiency and success rates, while also generating high-quality demonstrations suitable for downstream imitation learning. Evaluated on industrial tasks such as nut threading, gear meshing, and peg insertion, the method outperforms direct teleoperation and baselines that rely on expert priors or behavior cloning, improving success rates for novice operators and execution efficiency for experienced ones.
📝 Abstract
Fine-grained, contact-rich teleoperation remains slow, error-prone, and unreliable in real-world manipulation tasks, even for experienced operators. Shared autonomy offers a promising way to improve performance by combining human intent with automated assistance, but learning effective assistance in simulation requires a faithful model of human behavior, which is difficult to obtain in practice. We propose a real-to-sim-to-real shared autonomy framework that augments human teleoperation with learned corrective behaviors, using a simple yet effective k-nearest-neighbor (kNN) human surrogate to model operator actions in simulation. The surrogate is fit from less than five minutes of real-world teleoperation data and enables stable training of a residual copilot policy with model-free reinforcement learning. The resulting copilot is deployed to assist human operators in real-world fine-grained manipulation tasks. Through simulation experiments and a user study with sixteen participants on industry-relevant tasks, including nut threading, gear meshing, and peg insertion, we show that our system improves task success for novice operators and execution efficiency for experienced operators compared to direct teleoperation and shared-autonomy baselines that rely on expert priors or behavioral-cloning pilots. In addition, copilot-assisted teleoperation produces higher-quality demonstrations for downstream imitation learning.
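The core pipeline described above, a kNN surrogate fit to a small log of real teleoperation (state, action) pairs, with a learned residual correction added to the surrogate's action, can be sketched as follows. This is a minimal illustration under assumed interfaces, not the paper's released code; all class and function names here are hypothetical, and the untrained residual is stubbed out with zeros.

```python
import numpy as np


class KNNHumanSurrogate:
    """Nearest-neighbor surrogate of a human teleoperator (illustrative sketch).

    Stores (state, action) pairs logged from real teleoperation; at query
    time, returns the mean action of the k logged states nearest to the
    current state. Hypothetical interface, not the paper's implementation.
    """

    def __init__(self, states, actions, k=5):
        self.states = np.asarray(states, dtype=float)
        self.actions = np.asarray(actions, dtype=float)
        self.k = k

    def act(self, state):
        # Euclidean distance from the query state to every logged state
        dists = np.linalg.norm(self.states - np.asarray(state, dtype=float), axis=1)
        nearest = np.argsort(dists)[: self.k]
        # Average the actions the human took in the k most similar states
        return self.actions[nearest].mean(axis=0)


def copilot_step(surrogate, residual_policy, state):
    """Combine the surrogate 'pilot' action with a learned residual correction."""
    pilot_action = surrogate.act(state)
    return pilot_action + residual_policy(state)


# Toy demo on a 2-D state space with synthetic "teleoperation" data.
rng = np.random.default_rng(0)
S = rng.normal(size=(100, 2))          # logged states
A = 0.5 * S                            # pretend actions: move proportionally
surrogate = KNNHumanSurrogate(S, A, k=3)

residual = lambda s: np.zeros(2)       # untrained residual: no correction yet
action = copilot_step(surrogate, residual, np.array([0.1, -0.2]))
```

In the actual framework the residual policy is trained with model-free RL in simulation against this surrogate, so that the combined action improves on what the surrogate alone would do; with a zero residual, as in this toy demo, the copilot simply reproduces the surrogate's prediction.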