Iterative Event-based Motion Segmentation by Variational Contrast Maximization

📅 2025-04-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Fine-grained segmentation of multiple moving objects from event camera data remains challenging due to motion blur, low texture, and overlapping trajectories. Method: This paper proposes a variational iterative motion segmentation framework. Its core innovation is the first extension of the Contrast Maximization paradigm into a variational iterative structure, where motion-hypothesis-driven foreground residual modeling and joint optimization over event streams enable coupled motion compensation and edge enhancement. Contribution/Results: The method achieves heightened joint sensitivity to motion parameters and event inputs while maintaining robustness under complex noise. Evaluated on both public and in-house datasets, it improves segmentation accuracy by over 30% relative to state-of-the-art methods. It generates high-fidelity, edge-sharp motion-compensated images, enabling downstream motion object detection to achieve new state-of-the-art performance.

Technology Category

Application Category

📝 Abstract
Event cameras provide rich signals that are suitable for motion estimation since they respond to changes in the scene. As any visual changes in the scene produce event data, it is paramount to classify the data into different motions (i.e., motion segmentation), which is useful for various tasks such as object detection and visual servoing. We propose an iterative motion segmentation method, by classifying events into background (e.g., dominant motion hypothesis) and foreground (independent motion residuals), thus extending the Contrast Maximization framework. Experimental results demonstrate that the proposed method successfully classifies event clusters both for public and self-recorded datasets, producing sharp, motion-compensated edge-like images. The proposed method achieves state-of-the-art accuracy on moving object detection benchmarks with an improvement of over 30%, and demonstrates its possibility of applying to more complex and noisy real-world scenes. We hope this work broadens the sensitivity of Contrast Maximization with respect to both motion parameters and input events, thus contributing to theoretical advancements in event-based motion segmentation estimation. https://github.com/aoki-media-lab/event_based_segmentation_vcmax
Problem

Research questions and friction points this paper is trying to address.

Classify event data into different motions for segmentation
Extend Contrast Maximization for background and foreground separation
Improve moving object detection accuracy in noisy scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative motion segmentation via variational contrast maximization
Classifies events into background and foreground motions
Improves moving object detection accuracy by 30%
🔎 Similar Papers
No similar papers found.
R
Ryo Yamaki
Keio University, Japan
Shintaro Shiba
Shintaro Shiba
Woven by Toyota, Keio University, TU Berlin
Event-based VisionComputer visionMachine learningNeuroscience
G
Guillermo Gallego
Technische Universität Berlin; Einstein Center Digital Future, Robotics Institute Germany, and Science of Intelligence Excellence Cluster, Germany.
Yoshimitsu Aoki
Yoshimitsu Aoki
慶應義塾大学
コンピュータビジョン,パターン認識