🤖 AI Summary
This work addresses the challenge of uncontrolled landing pose (position and orientation) in robotic dynamic tossing. The authors propose a precise throw-flip control method that decouples parasitic rotation through impulse-momentum-based motion planning, combines a physics-based model of free flight with regression-based learning to capture unmodeled effects, and uses data assimilation of projectile dynamics to reduce sample complexity. This synergistic model-driven and data-driven framework significantly expands the achievable landing-pose space. Experiments demonstrate ±5 cm positional and ±45° orientational accuracy within only dozens of trials; data assimilation reduces sample complexity by an average of 40% for unseen target poses compared to end-to-end learning, and reusing past knowledge of in-hand object spinning accelerates learning by 70% when throwing a new object with a shifted center of mass.
📝 Abstract
Dynamic manipulation, such as robot tossing or throwing objects, has recently gained attention as a novel paradigm to speed up logistic operations. However, the focus has predominantly been on the object's landing location, irrespective of its final orientation. In this work, we present a method enabling a robot to accurately "throw-flip" objects to a desired landing pose (position and orientation). Conventionally, objects thrown by revolute robots suffer from parasitic rotation, resulting in highly restricted and uncontrollable landing poses. Our approach is based on two key design choices: first, leveraging the impulse-momentum principle, we design a family of throwing motions that effectively decouple the parasitic rotation, significantly expanding the feasible set of landing poses. Second, we combine a physics-based model of free flight with regression-based learning methods to account for unmodeled effects. Real robot experiments demonstrate that our framework can learn to throw-flip objects to a pose target within a (±5 cm, ±45°) threshold in dozens of trials. Thanks to data assimilation of projectile dynamics, sample complexity is reduced by an average of 40% when throw-flipping to unseen poses, compared to end-to-end learning methods. Additionally, we show that past knowledge of in-hand object spinning can be effectively reused, accelerating learning by 70% when throwing a new object with a Center of Mass (CoM) shift. A video summarizing the proposed method and the hardware experiments is available at https://youtu.be/txYc9b1oflU.
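The abstract's second design choice, combining a physics-based free-flight model with regression on its residuals, can be illustrated with a minimal sketch. This is a hypothetical 2-D example under gravity-only, torque-free flight, not the authors' implementation; the function names and the linear residual model are assumptions for illustration.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2


def ballistic_landing(release_pos, release_vel, release_omega, release_theta,
                      landing_height=0.0):
    """Gravity-only prediction of the planar landing pose of a tossed object.

    release_pos = (x0, z0) [m], release_vel = (vx, vz) [m/s],
    release_omega = angular velocity [rad/s], release_theta = orientation [rad].
    Returns (x_land, theta_land, time_of_flight).
    """
    x0, z0 = release_pos
    vx, vz = release_vel
    # Solve z0 + vz*t - 0.5*g*t^2 = landing_height for the positive root.
    disc = vz * vz + 2.0 * GRAVITY * (z0 - landing_height)
    t = (vz + np.sqrt(disc)) / GRAVITY
    x_land = x0 + vx * t
    # Torque-free free flight: orientation integrates at constant rate.
    theta_land = release_theta + release_omega * t
    return x_land, theta_land, t


def fit_residual_model(release_params, observed, predicted):
    """Least-squares linear model of physics-prediction residuals.

    Captures unmodeled effects (drag, release slip, ...) as a linear
    function of the release parameters, with a bias term.
    """
    X = np.hstack([release_params, np.ones((len(release_params), 1))])
    W, *_ = np.linalg.lstsq(X, observed - predicted, rcond=None)
    return W  # corrected prediction = predicted + [params, 1] @ W
```

The physics model supplies most of the landing-pose mapping, so the regressor only has to fit small residuals, which is one plausible reading of why the data-assimilated approach needs fewer samples than learning the full mapping end to end.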