Motion Segmentation and Egomotion Estimation from Event-Based Normal Flow

📅 2025-07-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing motion segmentation and ego-motion estimation methods for neuromorphic vision sensors typically rely on optical flow or explicit depth estimation, limiting their efficiency and robustness in dynamic, low-texture, or depth-ambiguous scenes. Method: This paper proposes a lightweight, joint optimization framework that leverages event-driven normal flow—implicitly encoding scene geometry—augmented by inertial measurements and geometric constraints, without requiring optical flow or dense depth. Contribution/Results: Key innovations include event-based superpixel segmentation, residual-driven iterative optimization, and hierarchical clustering integrating motion similarity and temporal consistency—significantly improving segmentation accuracy at object boundaries and translational ego-motion estimation. Evaluated on the EVIMO2v2 dataset, the method demonstrates robustness, real-time performance (≥1 kHz event-rate scalability), and temporal resolution fidelity under high-speed motion. It establishes a new paradigm for event-camera-enabled robotic navigation, offering efficient, geometry-aware perception with minimal computational overhead.

Technology Category

Application Category

📝 Abstract
This paper introduces a robust framework for motion segmentation and egomotion estimation using event-based normal flow, tailored specifically for neuromorphic vision sensors. In contrast to traditional methods that rely heavily on optical flow or explicit depth estimation, our approach exploits the sparse, high-temporal-resolution event data and incorporates geometric constraints between normal flow, scene structure, and inertial measurements. The proposed optimization-based pipeline iteratively performs event over-segmentation, isolates independently moving objects via residual analysis, and refines segmentations using hierarchical clustering informed by motion similarity and temporal consistency. Experimental results on the EVIMO2v2 dataset validate that our method achieves accurate segmentation and translational motion estimation without requiring full optical flow computation. This approach demonstrates significant advantages at object boundaries and offers considerable potential for scalable, real-time robotic and navigation applications.
Problem

Research questions and friction points this paper is trying to address.

Robust motion segmentation using event-based normal flow
Egomotion estimation without full optical flow computation
Handling sparse event data for real-time robotic applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses event-based normal flow for motion segmentation
Incorporates geometric constraints with inertial data
Optimization pipeline with hierarchical clustering refinement
🔎 Similar Papers
No similar papers found.