🤖 AI Summary
This work addresses the challenge of manipulating unknown articulated objects, such as drawers and refrigerator doors, with which service robots often struggle because they lack prior knowledge of the objects' kinematic structure. The authors propose an online estimation framework that integrates visually learned priors with real-time force and motion feedback during interaction. For the first time, deep-learning-derived visual priors are fused with proprioceptive data within a screw-theory-based factor graph, enabling real-time inference of articulation parameters and closed-loop control. Notably, the method requires no pre-existing object model and achieved a 75% success rate in autonomously opening previously unseen articulated objects in real-world robotic experiments. Its efficacy and robustness are further validated in both simulated and physical environments.
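For context, the screw-theory model referenced above parameterizes a one-degree-of-freedom articulation as a twist; the following is the standard textbook form (notation assumed here, not taken verbatim from the paper):

```latex
% A one-DoF articulation as a unit twist \xi = (\omega, v):
%   revolute joint:  rotation about an axis \omega (\|\omega\| = 1) through a
%                    point q, with v = -\omega \times q
%   prismatic joint: pure translation, \omega = 0, \|v\| = 1
% The handle/gripper pose then evolves along the screw as
T(\theta) \;=\; e^{\hat{\xi}\,\theta}\, T(0), \qquad
\hat{\xi} \;=\; \begin{bmatrix} \hat{\omega} & v \\ 0 & 0 \end{bmatrix} \in \mathfrak{se}(3)
```

Because both drawers (prismatic) and doors (revolute) are special cases of a screw motion, estimating $(\omega, v)$ online is enough to cover both object classes with a single model.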
📝 Abstract
From refrigerators to kitchen drawers, humans interact with articulated objects effortlessly every day while completing household chores. For automating these tasks, service robots must be capable of manipulating arbitrary articulated objects. Recent deep learning methods have been shown to predict valuable priors on the affordances of articulated objects from vision. In contrast, many other works estimate object articulations by observing the articulation motion, but this requires the robot to already be capable of manipulating the object. In this article, we propose a novel approach that combines these methods: a factor graph for online articulation estimation that fuses learned visual priors and proprioceptive sensing during interaction into an analytical model of articulation based on screw theory. With our method, a robotic system makes an initial prediction of the articulation from vision before touching the object, and then quickly updates the estimate from kinematic and force sensing during manipulation. We evaluate our method extensively in both simulated and real-world robotic manipulation experiments. We demonstrate several closed-loop estimation and manipulation experiments in which the robot was capable of opening previously unseen drawers. In real hardware experiments, the robot achieved a 75% success rate for the autonomous opening of unknown articulated objects.
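To make the fusion idea concrete, below is a minimal, self-contained sketch of estimating a revolute hinge by jointly penalizing deviation from a vision-derived prior and inconsistency with observed gripper motion. This is not the authors' implementation: all numbers, noise scales, and the batch least-squares solver are illustrative assumptions standing in for the paper's online factor graph.

```python
# Minimal sketch of the fusion idea (NOT the authors' implementation):
# refine a revolute articulation axis by combining a vision prior with
# gripper positions recorded during interaction.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical visual prior: predicted hinge direction and a point on it.
prior_dir = np.array([0.0, 0.0, 1.0])        # hinge assumed roughly vertical
prior_point = np.array([0.40, 0.10, 0.80])
sigma_prior, sigma_motion = 0.2, 0.005        # assumed noise scales

# Hypothetical proprioception: gripper positions logged while pulling.
grip_pts = np.array([[0.60, 0.10, 0.80],
                     [0.59, 0.14, 0.80],
                     [0.57, 0.18, 0.80]])

def residuals(x):
    d = x[:3] / np.linalg.norm(x[:3])         # axis direction (unit vector)
    p = x[3:]                                 # point on the axis
    res = [(d - prior_dir) / sigma_prior,     # "vision prior" factors
           (p - prior_point) / sigma_prior]
    # "Motion" factors: on a revolute joint the gripper stays at a fixed
    # radius from the axis, so penalize spread in the sample radii.
    radii = np.array([np.linalg.norm(np.cross(g - p, d)) for g in grip_pts])
    res.append((radii - radii.mean()) / sigma_motion)
    return np.concatenate(res)

sol = least_squares(residuals, np.concatenate([prior_dir, prior_point]))
axis = sol.x[:3] / np.linalg.norm(sol.x[:3])
print("refined hinge axis direction:", axis)
print("refined point on axis:      ", sol.x[3:])
```

In the paper's actual system, motion and force measurements arrive continuously during manipulation, so the estimate is updated incrementally in the factor graph rather than re-solved in batch; the same formulation extends to prismatic joints by swapping in the corresponding screw constraint.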