🤖 AI Summary
This work addresses natural teleoperation of kinematically redundant robotic arms. We propose a real-time imitation and generalization framework based on upper-limb human pose mapping. Methodologically, we introduce GRU-VAE for modeling the robot’s configuration manifold—integrating kinematic embedding with real-time decoding—to achieve end-to-end learning from human poses to high-dimensional robot configurations. Our key contributions are: (1) zero-shot generalization—generating kinematically feasible, diverse, and plausible robot configurations for unseen human poses during training; and (2) balanced real-time performance and dexterity, with low system latency suitable for dynamic human–robot interaction. Experiments demonstrate significant improvements in teleoperation naturalness, adaptability to novel tasks, and deployment robustness compared to prior approaches.
📝 Abstract
This paper presents a teleoperation system for controlling a redundant degree of freedom robot manipulator using human arm gestures. We propose a GRU-based Variational Autoencoder to learn a latent representation of the manipulator's configuration space, capturing its complex joint kinematics. A fully connected neural network maps human arm configurations into this latent space, allowing the system to mimic and generate corresponding manipulator trajectories in real time through the VAE decoder. The proposed method shows promising results in teleoperating the manipulator, enabling the generation of novel manipulator configurations from human features that were not present during training.