LiFR-Seg: Anytime High-Frame-Rate Segmentation via Event-Guided Propagation

📅 2026-03-22
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work tackles high-frame-rate semantic segmentation at arbitrary time instants using low-frame-rate cameras in dynamic scenes, where the perceptual gaps between frames hinder continuous inference. The authors formulate the Anytime Interframe Semantic Segmentation task, which combines a single historical RGB frame with an asynchronous event stream to produce high-temporal-resolution segmentation at any desired timestamp. The approach rests on three components: an event-guided, uncertainty-aware feature propagation mechanism; a temporal memory attention module; and a training strategy built on SHF-DSEC, a newly constructed high-frequency synthetic dataset. On the DSEC benchmark, the method reaches 73.82% mean Intersection-over-Union (mIoU), within 0.09% of the high-frame-rate upper bound, a gap that is not statistically significant.
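The summary's temporal memory attention module is not detailed here; a common way to realize such a module is cross-attention from the current propagated features into a bank of features retained from earlier timestamps. The sketch below is an illustrative assumption, not the paper's architecture: the class name, layer sizes, flattening scheme, and residual placement are all mine.

```python
import torch
import torch.nn as nn

class TemporalMemoryAttention(nn.Module):
    """Cross-attention from current features to a memory of past feature maps.

    Hypothetical sketch of a "temporal memory attention" mechanism; the
    paper's actual design may differ in every detail.
    """

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cur, memory):
        # cur:    (B, C, H, W) current propagated features
        # memory: (B, T, C, H, W) features kept from T earlier timestamps
        B, C, H, W = cur.shape
        q = cur.flatten(2).transpose(1, 2)                            # (B, HW, C)
        kv = memory.flatten(3).permute(0, 1, 3, 2).reshape(B, -1, C)  # (B, T*HW, C)
        out, _ = self.attn(q, kv, kv)      # queries attend over the memory bank
        out = self.norm(q + out)           # residual connection + layer norm
        return out.transpose(1, 2).reshape(B, C, H, W)
```

Attending over a memory of past features lets the model keep predictions coherent when the event-derived motion field alone would let fast-moving regions degrade.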

πŸ“ Abstract
Dense semantic segmentation in dynamic environments is fundamentally limited by the low-frame-rate (LFR) nature of standard cameras, which creates critical perceptual gaps between frames. To solve this, we introduce Anytime Interframe Semantic Segmentation: a new task for predicting segmentation at any arbitrary time using only a single past RGB frame and a stream of asynchronous event data. This task presents a core challenge: how to robustly propagate dense semantic features using a motion field derived from sparse and often noisy event data, all while mitigating feature degradation in highly dynamic scenes. We propose LiFR-Seg, a novel framework that directly addresses these challenges by propagating deep semantic features through time. The core of our method is an uncertainty-aware warping process, guided by an event-driven motion field and its learned, explicit confidence. A temporal memory attention module further ensures coherence in dynamic scenarios. We validate our method on the DSEC dataset and a new high-frequency synthetic benchmark (SHF-DSEC) we contribute. Remarkably, our LFR system achieves performance (73.82% mIoU on DSEC) that is statistically indistinguishable from an HFR upper-bound (within 0.09%) that has full access to the target frame. This work presents a new, efficient paradigm for achieving robust, high-frame-rate perception with low-frame-rate hardware. Project Page: https://candy-crusher.github.io/LiFR_Seg_Proj/#; Code: https://github.com/Candy-Crusher/LiFR-Seg.git.
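The abstract's core operation is warping dense semantic features along an event-derived motion field while down-weighting regions where that field is unreliable. The sketch below illustrates one plausible form of such uncertainty-aware warping; the function name, tensor shapes, and the fallback-blend rule are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def uncertainty_aware_warp(feat, flow, conf):
    """Warp a feature map along a motion field, blending by confidence.

    feat: (B, C, H, W) semantic features from the last RGB frame
    flow: (B, 2, H, W) event-derived motion field, in pixels (x, y)
    conf: (B, 1, H, W) learned confidence in [0, 1]
    Illustrative only; not the paper's API.
    """
    B, _, H, W = feat.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float()           # (2, H, W)
    coords = base.unsqueeze(0) + flow                     # (B, 2, H, W)
    # Normalize to [-1, 1] as required by grid_sample (x first, then y).
    gx = 2.0 * coords[:, 0] / max(W - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(H - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                  # (B, H, W, 2)
    warped = F.grid_sample(feat, grid, align_corners=True)
    # Low-confidence regions fall back toward the unwarped features,
    # mitigating degradation where the event data is sparse or noisy.
    return conf * warped + (1.0 - conf) * feat
```

With zero flow and full confidence this reduces to the identity, which is a convenient sanity check when wiring up such a propagation step.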
Problem

Research questions and friction points this paper is trying to address.

semantic segmentation
low-frame-rate
event camera
high-frame-rate perception
dynamic scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

event-based vision
semantic segmentation
feature propagation
uncertainty-aware warping
anytime perception