🤖 AI Summary
This work addresses the challenging problem of single-handed knotting of deformable linear objects (DLOs), such as ropes and cables. We propose a non-learning, demonstration-free, and fully interpretable method that generalizes across rope configurations. The approach decouples visual perception from motion control: it employs vision-guided piecewise-linear curve modeling to represent rope geometry, autonomously computes grasp poses and intermediate waypoints from geometric constraints, and generates goal-directed motion policies without any training or prior data. The method is robust to variations in initial rope configuration and to partial occlusion. Evaluated over 16 trials from previously unseen configurations, it achieves a 50% success rate at single-handed knot tying. By eliminating reliance on learning and human demonstrations, the framework improves interpretability, adaptability, and deployment efficiency for DLO manipulation tasks.
📝 Abstract
This work presents KnotDLO, a method for one-handed Deformable Linear Object (DLO) knot tying that is robust to occlusion, repeatable across varying initial rope configurations, interpretable in its generated motion policies, and free of human demonstrations or training. Grasp and target waypoints for future DLO states are planned from the current DLO shape. Grasp poses are computed by indexing the tracked piecewise-linear curve that represents the DLO state, based on the current curve shape, and vary piecewise-continuously with it. KnotDLO computes intermediate waypoints from the geometry of the current DLO state and the desired next state. The system decouples visual reasoning from control. In 16 knot-tying trials, KnotDLO achieves a 50% success rate in tying an overhand knot from previously unseen configurations.
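The abstract's core representational idea, a tracked piecewise-linear curve (polyline) of the rope that is indexed to obtain grasp locations, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the node coordinates, the arc-length parameterization, and the function names (`arc_lengths`, `point_at_fraction`) are all assumptions made for the example.

```python
import numpy as np

def arc_lengths(points):
    """Cumulative arc length along a piecewise-linear curve (polyline)."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    return np.concatenate([[0.0], np.cumsum(seg)])

def point_at_fraction(points, s):
    """Index the polyline at normalized arc-length fraction s in [0, 1].

    Linear interpolation between tracked nodes makes the returned point
    vary piecewise-continuously with the curve shape, matching the
    continuity property claimed for the grasp poses.
    """
    L = arc_lengths(points)
    target = s * L[-1]
    i = np.searchsorted(L, target, side="right") - 1
    i = min(i, len(points) - 2)          # clamp so s = 1.0 stays in range
    t = (target - L[i]) / (L[i + 1] - L[i])
    return (1 - t) * points[i] + t * points[i + 1]

# Hypothetical tracked rope state: five 2-D nodes of the polyline
rope = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0], [3.0, 1.0], [4.0, 0.0]])
grasp_xy = point_at_fraction(rope, 0.5)  # candidate grasp at the arc-length midpoint
```

A full grasp pose would additionally need an orientation, e.g. from the local tangent `points[i + 1] - points[i]`; intermediate waypoints between the current and desired next states could then be interpolated in the same arc-length parameterization.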