🤖 AI Summary
This work addresses the challenge of enabling robots to manipulate deformable ropes and satisfy complex topological constraints without demonstration supervision. To this end, the authors propose a hierarchical reinforcement learning framework that eschews single-step inverse models reliant on supervised signals in favor of a multi-step policy grounded in abstract topological actions. The knot-tying task is decomposed into coordinated subtasks, executed through layered collaboration among multiple agents. Experimental results demonstrate that the proposed approach significantly improves success rates and generalization across challenging knot configurations—including Figure-8 and Overhand knots—while substantially reducing planning time, thereby establishing a new state-of-the-art in demonstration-free robotic knot tying.
📝 Abstract
Robotic knot-tying is a fundamental challenge in robotics due to the complex interactions between deformable objects and strict topological constraints. We present TWISTED-RL, a framework that improves upon the previous state-of-the-art in demonstration-free knot-tying (TWISTED), which decomposed the single knot-tying problem into manageable subproblems, each addressed by a specialized agent. Our approach replaces TWISTED's single-step inverse model, learned via supervised learning, with a multi-step reinforcement learning policy conditioned on abstract topological actions rather than goal states. This change enables finer-grained topological state transitions while avoiding costly and ineffective data-collection protocols, yielding better generalization across diverse knot configurations. Experimental results demonstrate that TWISTED-RL solves previously unattainable knots of higher complexity, including commonly used knots such as the Figure-8 and the Overhand. Furthermore, the increased success rates and reduced planning time establish TWISTED-RL as the new state-of-the-art in robotic knot-tying without human demonstrations.
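To make the architectural difference concrete, the following is a minimal, hypothetical sketch (not the authors' code; all names and the action vocabulary are illustrative assumptions) contrasting the two interfaces: a single-step inverse model maps a (state, goal-state) pair to one action and needs supervised triples, whereas a policy conditioned on an abstract topological action can emit several low-level steps to realize one topological transition.

```python
# Hypothetical interface sketch; names and action set are illustrative,
# not taken from the TWISTED-RL paper.
from dataclasses import dataclass
from typing import List

# Assumed abstract topological action vocabulary (e.g., crossing moves);
# the paper's actual vocabulary may differ.
TOPO_ACTIONS = ["cross+", "cross-", "R1", "R2"]


@dataclass
class RopeState:
    crossings: List[str]  # simplified topological encoding of the rope


def single_step_inverse_model(state: RopeState, goal: RopeState) -> str:
    """Supervised inverse model: one low-level action per (state, goal) pair.

    Training it requires (state, goal, action) triples, i.e. the costly
    data-collection protocol the abstract refers to.
    """
    return "grasp_and_pull"  # placeholder single low-level action


def multi_step_policy(state: RopeState, topo_action: str,
                      horizon: int = 3) -> List[str]:
    """RL-style policy conditioned on an abstract topological action.

    Instead of a goal state, it receives a symbolic action and may take
    multiple low-level steps to realize the corresponding transition.
    """
    assert topo_action in TOPO_ACTIONS
    return [f"step_{t}:{topo_action}" for t in range(horizon)]


state = RopeState(crossings=[])
plan = multi_step_policy(state, "cross+")  # a multi-step low-level plan
```

The key design point the abstract highlights is the conditioning signal: a symbolic topological action generalizes across knot configurations, whereas a goal-state-conditioned inverse model is tied to the distribution of goals seen during supervised training.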