Edged USLAM: Edge-Aware Event-Based SLAM with Learning-Based Depth Priors

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a lightweight visual-inertial SLAM system that fuses event cameras with IMU data to address the limitations of traditional visual SLAM under challenging conditions such as rapid motion, low illumination, or drastic lighting changes. The approach employs an edge-aware front-end to enhance sparse event streams for robust feature tracking and integrates a lightweight learning module to estimate coarse depth priors for regions of interest, which are leveraged for nonlinear motion compensation and scale recovery. While maintaining low computational overhead, the method significantly improves localization accuracy and stability in complex lighting scenarios, particularly along slow-moving or structured trajectories. Experimental results on public benchmarks and real-world drone datasets demonstrate superior performance over purely event-based or learning-based methods, achieving continuous, low-drift, and high-precision localization.

📝 Abstract
Conventional visual simultaneous localization and mapping (SLAM) algorithms often fail under rapid motion, low illumination, or abrupt lighting transitions due to motion blur and limited dynamic range. Event cameras mitigate these issues with high temporal resolution and high dynamic range (HDR), but their sparse, asynchronous outputs complicate feature extraction and integration with other sensors, e.g., inertial measurement units (IMUs) and standard cameras. We present Edged USLAM, a hybrid visual-inertial system that extends Ultimate SLAM (USLAM) with an edge-aware front-end and a lightweight depth module. The front-end enhances event frames for robust feature tracking and nonlinear motion compensation, while the depth module provides coarse, region-of-interest (ROI)-based scene depth to improve motion compensation and scale consistency. Evaluations across public benchmarks and real-world unmanned aerial vehicle (UAV) flights demonstrate that performance varies significantly by scenario. For instance, event-only methods such as point-line event-based visual-inertial odometry (PL-EVIO) and learning-based pipelines such as deep event-based visual odometry (DEVO) excel in highly aggressive or extreme HDR conditions. In contrast, Edged USLAM provides superior stability and minimal drift on slow or structured trajectories, ensuring consistently accurate localization on real flights under challenging illumination. These findings highlight the complementary strengths of event-only, learning-based, and hybrid approaches, while positioning Edged USLAM as a robust solution for diverse aerial navigation tasks.
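To make the front-end idea concrete: event cameras emit sparse (x, y, t, polarity) tuples, which are typically accumulated into an event frame before feature tracking; an edge-aware step then emphasizes the high-gradient structure those events trace. The sketch below is a generic, minimal illustration of that pipeline (polarity-signed accumulation followed by a Sobel edge magnitude), not the paper's actual implementation; the function names and the plain NumPy convolution are assumptions for illustration only.

```python
import numpy as np

def accumulate_events(events, height, width):
    """Accumulate polarity-signed events into a 2D event frame.

    events: iterable of (x, y, t, polarity) tuples; polarity > 0 adds +1,
    otherwise -1, at pixel (y, x). Timestamps are ignored in this toy
    version (a real front-end would motion-compensate using them).
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, _t, p in events:
        frame[y, x] += 1.0 if p > 0 else -1.0
    return frame

def sobel_edge_magnitude(frame):
    """Approximate the gradient magnitude of an event frame with 3x3
    Sobel kernels, as a stand-in for an edge-aware enhancement step."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    padded = np.pad(frame, 1, mode="edge")
    gx = np.zeros_like(frame)
    gy = np.zeros_like(frame)
    h, w = frame.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(window * kx)
            gy[i, j] = np.sum(window * ky)
    return np.hypot(gx, gy)

# Toy usage: events tracing a vertical edge produce a strong response
# in the edge-magnitude map next to that column.
events = [(2, y, 0.0, 1) for y in range(5)]
frame = accumulate_events(events, 5, 5)
edges = sobel_edge_magnitude(frame)
```

In a full system, the accumulated frame would additionally be warped by an IMU-predicted (and, in Edged USLAM, depth-informed) motion model before accumulation, so that events fired by the same scene edge land on the same pixel.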
Problem

Research questions and friction points this paper is trying to address.

visual SLAM
event camera
feature extraction
sensor fusion
low illumination
Innovation

Methods, ideas, or system contributions that make the work stand out.

event-based SLAM
edge-aware frontend
depth priors
visual-inertial odometry
hybrid SLAM
Şebnem Sarıözkan
Automatic Control Group (RAT), Paderborn University, 33098 Paderborn, Germany
Hürkan Şahin
Automatic Control Group (RAT), Paderborn University, 33098 Paderborn, Germany
Olaya Álvarez-Tuñón
EIVA A/S, ITU Copenhagen
SLAM · deep learning · Visual Odometry
Erdal Kayacan
Full Professor at Paderborn University
Robotics · control · unmanned systems