ARC-Calib: Autonomous Markerless Camera-to-Robot Calibration via Exploratory Robot Motions

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional camera-to-robot hand-eye calibration relies on manual fiducial markers or pre-trained tracking models, hindering edge deployment and limiting generalizability. ARC-Calib introduces the first purely model-driven, fully autonomous, markerless calibration framework: the robot autonomously generates exploratory motions that produce easily trackable visual trajectories; geometric constraints, derived from coplanarity and collinearity priors, are then formulated and solved via direct geometric optimization to estimate the extrinsic parameters. The method requires no manual markers, pre-trained models, additional data collection, or platform-specific fine-tuning, enabling zero-shot generalization across robots. Evaluated in simulation and on real robotic arms, it achieves rotation errors below 1.2° and translation errors below 1.8 mm while meeting real-time execution requirements on edge devices.

📝 Abstract
Camera-to-robot (also known as eye-to-hand) calibration is a critical component of vision-based robot manipulation. Traditional marker-based methods often require human intervention for system setup. Furthermore, existing autonomous markerless calibration methods typically rely on pre-trained robot tracking models that impede their application on edge devices and require fine-tuning for novel robot embodiments. To address these limitations, this paper proposes a model-based markerless camera-to-robot calibration framework, ARC-Calib, that is fully autonomous and generalizable across diverse robots and scenarios without requiring extensive data collection or learning. First, exploratory robot motions are introduced to generate easily trackable trajectory-based visual patterns in the camera's image frames. Then, a geometric optimization framework is proposed to exploit the coplanarity and collinearity constraints from the observed motions to iteratively refine the estimated calibration result. Our approach eliminates the need for extra effort in either environmental marker setup or data collection and model training, rendering it highly adaptable across a wide range of real-world autonomous systems. Extensive experiments are conducted in both simulation and the real world to validate its robustness and generalizability.
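The abstract's core idea is that a known exploratory motion (e.g. moving the end-effector along a straight line in the robot base frame) must project to a straight line in the image under the correct extrinsic transform, so deviation from collinearity can serve as an optimization residual. The paper does not publish its solver, so the sketch below is only a hypothetical illustration of a collinearity residual, assuming a pinhole camera with known intrinsics `K` and a candidate base-to-camera extrinsic `(R, t)`; the function names and shapes are our own.

```python
import numpy as np

def collinearity_residual(R, t, K, pts_robot):
    """Per-point residual for one straight-line exploratory motion (hypothetical sketch).

    pts_robot: (N, 3) end-effector waypoints in the robot base frame,
    assumed to lie on a single 3D line. Under a correct extrinsic (R, t)
    mapping the base frame into the camera frame, their pinhole projections
    must also be collinear; the distance of each projected pixel to the
    best-fit image line is the residual an optimizer would drive to zero.
    """
    pts_cam = pts_robot @ R.T + t            # base frame -> camera frame
    uv = pts_cam @ K.T                       # pinhole projection (homogeneous)
    uv = uv[:, :2] / uv[:, 2:3]              # normalize to pixel coordinates
    centered = uv - uv.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                        # least-squares line direction
    off_line = centered - np.outer(centered @ direction, direction)
    return np.linalg.norm(off_line, axis=1)  # pixel distance to the fitted line
```

Stacking such residuals over many exploratory motions (plus analogous coplanarity terms for planar motions) and feeding them to a nonlinear least-squares solver would iteratively refine `(R, t)`, in the spirit of the geometric optimization the abstract describes.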
Problem

Research questions and friction points this paper is trying to address.

Autonomous camera-to-robot calibration without markers
Eliminates need for pre-trained robot tracking models
Generalizable across diverse robots and scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autonomous markerless calibration via exploratory robot motions
Geometric optimization using coplanarity and collinearity constraints
No need for markers, data collection, or model training
Podshara Chanrungmaneekul
Department of Computer Science, Rice University, Houston, TX 77005, USA
Yiting Chen
Department of Computer Science, Rice University, Houston, TX 77005, USA
Joshua T. Grace
Department of Mechanical Engineering and Material Science, Yale University, New Haven, CT 06511, USA
Aaron M. Dollar
Professor of Mechanical Engineering & Materials Science and Computer Science, Yale University
Robotics · Mechanisms · Biomechanics · Manipulation · Rehabilitation Robotics
Kaiyu Hang
Rice University
Robotic Grasping · Robotic Manipulation