🤖 AI Summary
This work addresses the challenge of balancing safety and efficiency in contact-rich teleoperation, where a single fixed impedance forces a trade-off: compliance protects the environment, while stiffness keeps the robot responsive and able to apply force. The authors propose a vision-driven shared-control strategy in which the human operator commands the robot's pose while the system modulates a direction-dependent stiffness matrix in real time based solely on visual input, without requiring contact measurements at deployment. Notably, this approach achieves the first vision-based zero-shot transfer of impedance regulation: a lightweight visual policy network is trained in simulation, supervised by stiffness labels derived from privileged contact information, and then deployed directly on real-world images without fine-tuning. A human-subject study demonstrates that the method matches the safety of constant low-stiffness control while attaining the task efficiency of constant high-stiffness control.
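The core mechanism described above is a Cartesian impedance law whose stiffness matrix is anisotropic, i.e. soft along some directions and stiff along others. The paper does not publish its exact parameterization, so the sketch below is a minimal, hypothetical construction: a rank-one projector makes the robot compliant along an assumed contact normal while staying stiff in the tangential plane (the gains `k_low`, `k_high` and the normal itself are illustrative assumptions, not values from the paper).

```python
import numpy as np

def impedance_wrench(x_des, x, v, K, D):
    """Cartesian impedance law: F = K (x_des - x) - D v.

    K is a direction-dependent (anisotropic) stiffness matrix,
    D the corresponding damping matrix."""
    return K @ (x_des - x) - D @ v

def directional_stiffness(normal, k_low=50.0, k_high=800.0):
    """Hypothetical parameterization: soft (k_low) along the contact
    normal, stiff (k_high) in the tangential plane."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    P = np.outer(n, n)  # projector onto the normal direction
    return k_low * P + k_high * (np.eye(3) - P)
```

In this sketch the vision policy would only need to output the scalar gains (or the full matrix) at runtime; the operator's pose command enters as `x_des`.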
📝 Abstract
In teleoperation of contact-rich manipulation tasks, selecting robot impedance is critical but difficult. The robot must be compliant to avoid damaging the environment, but stiff to remain responsive and to apply force when needed. In this paper, we present Stiffness Copilot, a vision-based policy for shared-control teleoperation in which the operator commands robot pose and the policy adjusts robot impedance online. To train Stiffness Copilot, we first infer direction-dependent stiffness matrices in simulation using privileged contact information. We then use these matrices to supervise a lightweight vision policy that predicts robot stiffness from wrist-camera images and transfers zero-shot to real images at runtime. In a human-subject study, Stiffness Copilot achieved safety comparable to using a constant low stiffness while matching the efficiency of using a constant high stiffness.
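The training pipeline in the abstract is a privileged-information distillation: simulation has access to contact state, which is turned into stiffness labels that supervise a vision policy. The paper's labeling rule and network are not reproduced here; the toy sketch below uses an assumed rule (stiffer when contact force is light, softer when it is heavy, with illustrative bounds `k_min`, `k_max`, `f_ref`) and a plain least-squares regressor as a stand-in for the lightweight vision network.

```python
import numpy as np

def privileged_stiffness_label(contact_force, k_min=50.0, k_max=800.0, f_ref=10.0):
    """Assumed labeling rule: high stiffness under light contact,
    decaying toward k_min as privileged contact force grows."""
    return k_min + (k_max - k_min) * np.exp(-np.asarray(contact_force) / f_ref)

rng = np.random.default_rng(0)

# Synthetic stand-ins: "image features" and privileged contact forces
feats = rng.normal(size=(200, 8))
forces = 10.0 * np.abs(feats[:, 0])      # pretend force correlates with one feature
labels = privileged_stiffness_label(forces)

# Fit a lightweight linear policy on the features (stand-in for the CNN);
# at deployment only the features (images) are needed, not the forces.
w, *_ = np.linalg.lstsq(feats, labels, rcond=None)
pred = feats @ w
```

The key property this illustrates is that the privileged signal (`forces`) is used only to build `labels` at training time; the deployed policy maps features to stiffness directly, which is what enables zero-shot use on real images.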