🤖 AI Summary
Conventional autofocus methods suffer from latency and inefficiency in extreme scenarios—such as high-speed motion or low-light conditions—due to their reliance on iterative focus search across multiple frames.
Method: This paper proposes the first single-step, event-driven autofocus method, eliminating iterative focus searching entirely. Its core innovation is the Event Laplacian Product (ELP), a novel focus metric that reformulates autofocus as a focal-state detection task on a single event stream. The authors further design an end-to-end, event-camera-specific pipeline (evaluated on DAVIS346 and EVK4 cameras) that jointly encodes event streams and grayscale Laplacian features.
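To make the idea concrete, here is a minimal illustrative sketch of an ELP-style score. The paper's exact formulation is not given in this summary; the sketch assumes the metric weights per-pixel grayscale Laplacian energy by event activity, since events cluster at moving, high-contrast edges that sharpen when in focus. The function name and the specific combination rule are assumptions for illustration only.

```python
import numpy as np

def event_laplacian_product(event_counts, gray_frame):
    """Hypothetical ELP-style focus score: event-weighted Laplacian energy.

    event_counts : per-pixel count of events accumulated over a short window
    gray_frame   : grayscale intensity frame aligned with that window
    """
    # Discrete 4-neighbor Laplacian of the grayscale frame; its magnitude
    # grows as edges sharpen, i.e. as the lens approaches focus.
    lap = (
        -4.0 * gray_frame
        + np.roll(gray_frame, 1, axis=0) + np.roll(gray_frame, -1, axis=0)
        + np.roll(gray_frame, 1, axis=1) + np.roll(gray_frame, -1, axis=1)
    )
    # Weight Laplacian energy by event activity and sum into a scalar score;
    # a peak in this score over focal positions marks the in-focus state.
    return float(np.sum(event_counts * np.abs(lap)))
```

With a fixed event map, a sharp (high-frequency) frame scores higher than a defocused (flat) one, which is the monotone behavior a focus-detection metric needs.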
Results: Experiments demonstrate up to 67% reduction in autofocus latency, with focal error reduced by 24× (DAVIS346) and 22× (EVK4). The method significantly enhances real-time performance and focusing accuracy under high-dynamic-range and low-illumination conditions.
📝 Abstract
High-speed autofocus in extreme scenes remains a significant challenge. Traditional methods rely on repeated sampling around the focus position, resulting in "focus hunting". Event-driven methods have advanced focusing speed and improved performance in low-light conditions; however, current approaches still require at least one lengthy round of "focus hunting", involving the collection of a complete focus stack. We introduce the Event Laplacian Product (ELP) focus detection function, which combines event data with grayscale Laplacian information, redefining focus search as a detection task. This innovation enables the first one-step event-driven autofocus, cutting focusing time by up to two-thirds and reducing focusing error by 24 times on the DAVIS346 dataset and 22 times on the EVK4 dataset. Additionally, we present an autofocus pipeline tailored for event-only cameras, achieving accurate results across a range of challenging motion and lighting conditions. All datasets and code will be made publicly available.