🤖 AI Summary
To address the challenge of high-precision visual alignment of small objects (e.g., screws) for humanoid robots, this paper proposes a dual-view closed-loop visual servoing framework. It fuses images from head- and torso-mounted cameras with the head joint angles, employing a Transformer architecture augmented with a distance estimation module and a multi-perception head to collaboratively model heterogeneous sensory features. The method achieves, for the first time in close-range scenarios, real-time pose estimation and servo control with millimeter-level accuracy (0.8–1.3 mm), attaining success rates of 93%–100% on M4–M8 screw manipulation tasks and significantly outperforming conventional approaches. Key innovations include joint-state-embedded cross-view fusion of vision and proprioception, and a distance-sensitive visual servoing paradigm explicitly optimized for micro-manipulation.
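The summary above describes a cross-view fusion architecture with an embedded joint-state token and parallel output heads. A minimal PyTorch sketch of how such a model might be wired up is given below; the module names, token counts, embedding dimensions, and the specific pair of output heads (relative offset and distance) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class DualViewServoNet(nn.Module):
    """Hypothetical dual-view fusion model: two camera streams plus a
    head-joint-angle token, fused by a Transformer encoder."""

    def __init__(self, d_model=256, n_tokens=196, n_joints=2,
                 n_heads=8, n_layers=4):
        super().__init__()
        # Per-view patch embeddings (assumed simple conv stems) for the
        # head and torso cameras; 224x224 input -> 14x14 = 196 tokens.
        self.head_stem = nn.Conv2d(3, d_model, kernel_size=16, stride=16)
        self.torso_stem = nn.Conv2d(3, d_model, kernel_size=16, stride=16)
        # Proprioception: head joint angles embedded as one extra token.
        self.joint_embed = nn.Linear(n_joints, d_model)
        # Learned position/view embeddings distinguish the two streams.
        self.pos_embed = nn.Parameter(torch.zeros(1, 2 * n_tokens + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # "Multi-perception head": parallel readouts, here one for the
        # tip-to-slot offset and one for tool-to-target distance.
        self.offset_head = nn.Linear(d_model, 3)    # (dx, dy, dz), e.g. in mm
        self.distance_head = nn.Linear(d_model, 1)  # scalar range estimate

    def forward(self, head_img, torso_img, joint_angles):
        # Tokenize each view: (B, C, H', W') -> (B, N, d_model)
        h = self.head_stem(head_img).flatten(2).transpose(1, 2)
        t = self.torso_stem(torso_img).flatten(2).transpose(1, 2)
        j = self.joint_embed(joint_angles).unsqueeze(1)  # (B, 1, d_model)
        x = torch.cat([j, h, t], dim=1) + self.pos_embed
        x = self.encoder(x)
        fused = x[:, 0]  # read out from the joint-state token
        return self.offset_head(fused), self.distance_head(fused)
```

For example, `DualViewServoNet()(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224), torch.randn(1, 2))` returns a `(1, 3)` offset and a `(1, 1)` distance estimate.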
📝 Abstract
High-precision tiny object alignment remains a common and critical challenge for humanoid robots in real-world environments. To address this problem, this paper proposes a vision-based framework for precisely estimating and controlling the relative position between a handheld tool and a target object for humanoid robots, e.g., a screwdriver tip and a screw head slot. By fusing images from the head and torso cameras on a robot with its head joint angles, the proposed Transformer-based visual servoing method can correct the handheld tool's positional errors effectively, especially at close distances. Experiments on M4-M8 screws demonstrate an average convergence error of 0.8-1.3 mm and a success rate of 93%-100%. Comparative analysis validates that this high-precision tiny object alignment capability is enabled by the Distance Estimation Transformer architecture and the Multi-Perception-Head mechanism proposed in this paper.
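To make the closed-loop correction concrete, here is a hedged sketch of an outer servo loop built around the model sketched above. The `robot` interface (`capture_images`, `head_joint_angles`, `move_tool_by`), the proportional gain, and the stopping tolerance are all invented for illustration; the paper's actual control law is not reproduced here.

```python
import torch

def servo_to_screw(model, robot, tol_mm=1.0, max_iters=50):
    """Iteratively estimate the tip-to-slot offset and nudge the tool
    until the predicted offset falls below a tolerance (assumed API)."""
    model.eval()
    for _ in range(max_iters):
        # Hypothetical calls returning (1, 3, H, W) image tensors and a
        # (1, n_joints) joint-angle tensor.
        head_img, torso_img = robot.capture_images()
        joints = robot.head_joint_angles()
        with torch.no_grad():
            offset_mm, _dist = model(head_img, torso_img, joints)
        offset_mm = offset_mm.squeeze(0)
        if offset_mm.norm().item() < tol_mm:
            return True  # tip aligned with slot within tolerance
        # Simple proportional correction of the handheld tool pose.
        robot.move_tool_by(-0.5 * offset_mm)
    return False
```

The key design point this sketch illustrates is that the pose estimate is recomputed from fresh sensor data on every iteration, so estimation errors shrink as the tool converges on the target rather than accumulating open-loop.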