🤖 AI Summary
Existing event camera calibration methods rely on artificial targets (e.g., checkerboards or flashing patterns) or reconstructed intensity images, which entails high manual effort, long calibration times, and poor adaptability to dynamic scenes. To address these limitations, this paper proposes an intrinsic calibration method based solely on geometric line features, requiring no calibration board, flashing pattern, or image reconstruction. The approach detects straight-line structures (e.g., door and window edges) directly from raw event streams in man-made environments, constructs an event-to-line geometric model, and follows a robust initial estimation with nonlinear least-squares refinement. It supports both planar and non-planar lines and works with monocular and stereo event cameras. Experiments on synthetic and real-world data demonstrate the method's feasibility and accuracy while improving calibration efficiency and scene adaptability. The source code is publicly available.
📝 Abstract
Camera calibration is an essential prerequisite for event-based vision applications. Current event camera calibration methods typically rely on flashing patterns, reconstructed intensity images, or features extracted from events. These methods are generally time-consuming and require manually placed calibration objects, which makes them unsuitable for rapidly changing scenarios. In this paper, we propose a line-based event camera calibration framework that exploits the geometric lines of common objects in man-made environments, e.g., doors, windows, and boxes. Unlike previous methods, ours detects lines directly from event streams and leverages an event-line calibration model to generate an initial guess of the camera parameters, which is suitable for both planar and non-planar lines. A nonlinear optimization then refines the camera parameters. Both simulation and real-world experiments, on monocular and stereo event cameras, demonstrate the feasibility and accuracy of our method. The source code is released at https://github.com/Zibin6/line_based_event_camera_calib.
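The abstract's core idea (lines stay straight under a correct camera model, so line-straightness residuals can drive a nonlinear refinement) can be illustrated with a simplified plumb-line-style sketch. This is not the paper's full event-line model: focal length and principal point are held fixed at assumed values, and only a single radial-distortion coefficient is estimated by minimizing the perpendicular distance of undistorted events to a line fitted to each event cluster. All names, values, and the synthetic data below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical intrinsics for illustration (not from the paper).
FX, FY, CX, CY = 400.0, 400.0, 320.0, 240.0

def undistort(pix, k1, iters=10):
    """Pixel coords -> undistorted normalized coords (1-term radial model)."""
    x = (pix[:, 0] - CX) / FX
    y = (pix[:, 1] - CY) / FY
    xu, yu = x.copy(), y.copy()
    for _ in range(iters):          # fixed-point inversion of the distortion
        r2 = xu**2 + yu**2
        xu = x / (1.0 + k1 * r2)
        yu = y / (1.0 + k1 * r2)
    return np.stack([xu, yu], axis=1)

def line_residuals(k1, event_groups):
    """Perpendicular distances of undistorted events to their best-fit line."""
    res = []
    for pix in event_groups:
        p = undistort(pix, k1)
        q = p - p.mean(axis=0)
        _, _, vt = np.linalg.svd(q, full_matrices=False)
        res.append(q @ vt[-1])      # vt[-1] is the line normal (least variance)
    return np.concatenate(res)

# Synthetic events sampled on straight-line projections, then distorted.
rng = np.random.default_rng(0)
k1_true = -0.2
groups = []
for _ in range(6):
    th = rng.uniform(0.0, np.pi)
    n = np.array([np.cos(th), np.sin(th)])            # line normal
    t = np.linspace(-0.5, 0.5, 50)[:, None]
    p = rng.uniform(-0.3, 0.3) * n + t * np.array([-n[1], n[0]])
    r2 = (p**2).sum(axis=1, keepdims=True)
    pd = p * (1.0 + k1_true * r2)                     # apply radial distortion
    groups.append(pd * np.array([FX, FY]) + np.array([CX, CY]))

fit = least_squares(lambda k: line_residuals(k[0], groups), x0=[0.0])
print(fit.x[0])   # recovers k1 close to -0.2
```

Using the point-to-line distance of grouped events as the least-squares residual mirrors the classical plumb-line formulation; the paper's method goes further by building an event-to-line model that also constrains focal length and principal point and handles non-planar lines.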