Nonlinear Motion-Guided and Spatio-Temporal Aware Network for Unsupervised Event-Based Optical Flow

๐Ÿ“… 2025-05-08
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing event-based optical flow estimation methods predominantly adopt frame-based modeling, neglecting the inherent asynchrony and spatio-temporal continuity of events, and assume linear inter-event motion, leading to error accumulation over long sequences. To address this, the authors propose an unsupervised optical flow estimation framework tailored for long-duration event streams. First, they introduce a nonlinear motion compensation loss that explicitly models nonlinearity in inter-event motion. Second, they design a Spatio-Temporal Motion Feature Aware (STMFA) module to capture long-range spatio-temporal dependencies, coupled with an Adaptive Motion Feature Enhancement (AMFE) module to improve the robustness of motion representations. Evaluated on MVSEC and DSEC-Flow under unsupervised settings, the method achieves state-of-the-art performance, improving both the accuracy and stability of optical flow estimation for extended event sequences.

๐Ÿ“ Abstract
Event cameras have the potential to capture continuous motion information over time and space, making them well-suited for optical flow estimation. However, most existing learning-based methods for event-based optical flow adopt frame-based techniques, ignoring the spatio-temporal characteristics of events. Additionally, these methods assume linear motion between consecutive events within the loss time window, which increases optical flow errors in long-time sequences. In this work, we observe that rich spatio-temporal information and accurate nonlinear motion between events are crucial for event-based optical flow estimation. Therefore, we propose E-NMSTFlow, a novel unsupervised event-based optical flow network focusing on long-time sequences. We propose a Spatio-Temporal Motion Feature Aware (STMFA) module and an Adaptive Motion Feature Enhancement (AMFE) module, both of which utilize rich spatio-temporal information to learn spatio-temporal data associations. Meanwhile, we propose a nonlinear motion compensation loss that utilizes the accurate nonlinear motion between events to improve the unsupervised learning of our network. Extensive experiments demonstrate the effectiveness and superiority of our method. Remarkably, our method ranks first among unsupervised learning methods on the MVSEC and DSEC-Flow datasets. Our project page is available at https://wynelio.github.io/E-NMSTFlow.
Problem

Research questions and friction points this paper is trying to address.

Estimating optical flow from event cameras using spatio-temporal information
Addressing linear motion assumption errors in long-time event sequences
Improving unsupervised learning with nonlinear motion compensation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised network for event-based optical flow
Spatio-temporal aware modules for motion feature learning
Nonlinear motion compensation loss for accuracy
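To make the loss idea concrete: motion compensation losses for event cameras typically warp events to a reference time and reward a sharp (high-contrast) image of warped events; the paper's contribution is to use a nonlinear, rather than linear, inter-event motion model during warping. The sketch below is illustrative only and is not the paper's implementation: it assumes a simple second-order (velocity plus acceleration) motion model and a variance-based contrast objective, and all function and parameter names are hypothetical.

```python
import numpy as np

def warp_events(xs, ys, ts, flow, accel, t_ref):
    """Warp event coordinates to reference time t_ref using a
    second-order (nonlinear) motion model: per-event displacement is
    v*dt + 0.5*a*dt^2 instead of the usual linear v*dt.
    `flow` and `accel` are (vx, vy) and (ax, ay); illustrative only."""
    dt = t_ref - ts
    wx = xs + flow[0] * dt + 0.5 * accel[0] * dt ** 2
    wy = ys + flow[1] * dt + 0.5 * accel[1] * dt ** 2
    return wx, wy

def motion_compensation_loss(xs, ys, ts, flow, accel, t_ref, shape=(32, 32)):
    """Accumulate warped events into an image and score its sharpness
    by variance (contrast maximization); a better motion estimate
    aligns events, sharpens the image, and lowers the loss."""
    wx, wy = warp_events(xs, ys, ts, flow, accel, t_ref)
    ix = np.clip(np.round(wx).astype(int), 0, shape[1] - 1)
    iy = np.clip(np.round(wy).astype(int), 0, shape[0] - 1)
    img = np.zeros(shape)
    np.add.at(img, (iy, ix), 1.0)  # unbuffered accumulation per pixel
    return -np.var(img)  # negative variance: minimize to sharpen
```

For example, events generated by a point moving at constant velocity collapse onto a single pixel when warped with the correct motion parameters, giving a lower loss than warping with zero motion. In the paper this style of objective drives unsupervised training of the network's flow predictions; the quadratic model here merely stands in for "nonlinear motion between events."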
๐Ÿ”Ž Similar Papers
No similar papers found.
Zuntao Liu
Faculty of Robot Science and Engineering, Northeastern University, Shenyang, China
Hao Zhuang
Faculty of Robot Science and Engineering, Northeastern University, Shenyang, China
Junjie Jiang
Northeastern University (Shenyang, China)
Robot Navigation · Deep Reinforcement Learning · Spiking Neural Networks · Event Cameras
Yuhang Song
Faculty of Robot Science and Engineering, Northeastern University, Shenyang, China
Zheng Fang
Faculty of Robot Science and Engineering, Northeastern University, Shenyang, China