Prepare for Warp Speed: Sub-millisecond Visual Place Recognition Using Event Cameras

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing event-camera-based visual place recognition (VPR) methods require tens to hundreds of milliseconds of event data, failing to meet real-time localization demands. To address this, we propose the first sub-millisecond VPR system. Methodologically, we design a binary feature encoding scheme based on active pixel locations and leverage bit-level operations for ultra-fast inter-frame similarity computation. We further introduce event activity normalization and a novel evaluation metric, Time to Correct Match (TCM), to better quantify temporal efficiency. Evaluated on the QCR-Event-Dataset, our method achieves an 11.33× improvement in Recall@1 over existing baselines; on the 8 km Brisbane-Event-VPR dataset, it yields a 5.92× gain. Crucially, our system achieves location matching in under 1 ms, the first such result, significantly reducing localization latency for robots operating in unknown environments.

📝 Abstract
Visual Place Recognition (VPR) enables systems to identify previously visited locations within a map, a fundamental task for autonomous navigation. Prior works have developed VPR solutions using event cameras, which asynchronously measure per-pixel brightness changes with microsecond temporal resolution. However, these approaches rely on dense representations of the inherently sparse camera output and require tens to hundreds of milliseconds of event data to predict a place. Here, we break this paradigm with Flash, a lightweight VPR system that predicts places using sub-millisecond slices of event data. Our method is based on the observation that active pixel locations provide strong discriminative features for VPR. Flash encodes these active pixel locations using efficient binary frames and computes similarities via fast bitwise operations, which are then normalized based on the relative event activity in the query and reference frames. Flash improves Recall@1 for sub-millisecond VPR over existing baselines by 11.33x on the indoor QCR-Event-Dataset and 5.92x on the 8 km Brisbane-Event-VPR dataset. Moreover, our approach reduces the duration for which the robot must operate without awareness of its position, as evidenced by a localization latency metric we term Time to Correct Match (TCM). To the best of our knowledge, this is the first work to demonstrate sub-millisecond VPR using event cameras.
Problem

Research questions and friction points this paper is trying to address.

Achieving sub-millisecond visual place recognition using event cameras
Overcoming reliance on dense representations of sparse event data
Reducing localization latency for autonomous navigation systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses sub-millisecond event data slices
Encodes active pixels via binary frames
Computes similarities with fast bitwise operations
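The core pipeline described above (encode active pixel locations as binary frames, compare frames with bitwise operations, normalize by event activity) can be sketched as follows. This is an illustrative reconstruction, not the authors' Flash implementation: the function names and the union-based normalization are assumptions, and the paper's exact activity-normalization scheme may differ.

```python
import numpy as np

def encode_binary_frame(events_xy, height, width):
    """Pack the active pixel locations from one sub-millisecond event
    slice into a bit-packed binary frame (1 bit per pixel)."""
    frame = np.zeros((height, width), dtype=np.uint8)
    # Each event contributes its (x, y) pixel location; polarity is ignored.
    frame[events_xy[:, 1], events_xy[:, 0]] = 1
    return np.packbits(frame)  # uint8 array, 8 pixels per byte

def normalized_similarity(query_bits, ref_bits):
    """Overlap of active pixels (bitwise AND popcount), normalized by
    the combined event activity of query and reference frames
    (an assumed normalization, in the spirit of the paper's
    event activity normalization)."""
    overlap = int(np.unpackbits(query_bits & ref_bits).sum())
    activity = int(np.unpackbits(query_bits | ref_bits).sum())
    return overlap / activity if activity else 0.0
```

A query is then matched by computing this similarity against every reference frame in the map and taking the argmax; because both encoding and comparison reduce to byte-level bitwise operations, the per-query cost stays far below typical descriptor-based VPR pipelines.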