🤖 AI Summary
Motion estimation and noise removal in event cameras are strongly coupled, yet existing pipelines treat them as separate, sequential steps. This paper proposes the first framework that estimates motion and noise jointly. The method unifies optical-flow or ego-motion estimation with noise modeling under a differentiable Contrast Maximization objective, allowing plug-and-play motion estimation modules (e.g., classical algorithms or deep networks) while explicitly capturing motion-dependent event statistics. A unified loss function jointly optimizes reconstruction fidelity, motion accuracy, and denoising performance. On the E-MLB denoising benchmark the approach achieves state-of-the-art results, with competitive results on DND21; it also improves motion estimation accuracy and intensity-image reconstruction quality, overcoming the limitations of conventional sequential processing pipelines.
📝 Abstract
Event cameras are emerging vision sensors whose noise is challenging to characterize. Existing denoising methods for event cameras consider other tasks such as motion estimation separately (i.e., sequentially after denoising). However, motion is an intrinsic part of event data, since scene edges cannot be sensed without motion. This work proposes, to the best of our knowledge, the first method that simultaneously estimates motion in its various forms (e.g., ego-motion, optical flow) and noise. The method is flexible, as it allows replacing the 1-step motion estimation of the widely-used Contrast Maximization framework with any other motion estimator, such as deep neural networks. The experiments show that the proposed method achieves state-of-the-art results on the E-MLB denoising benchmark and competitive results on the DND21 benchmark, while also demonstrating its efficacy on motion estimation and intensity reconstruction tasks. We believe the proposed approach strengthens the theory of event-data denoising and will impact practical denoising use-cases, as we will release the code upon acceptance. Project page: https://github.com/tub-rip/ESMD
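The Contrast Maximization (CMax) step that the method builds on can be illustrated with a minimal sketch: warp each event along a candidate motion, accumulate the warped events into an image (IWE), and score the candidate by the image's variance, which peaks when the motion aligns the events onto sharp edges. The function names and the toy edge scene below are illustrative assumptions, not the paper's actual code, which additionally models noise within the same objective.

```python
import numpy as np

def iwe_contrast(xs, ys, ts, v, H, W):
    """CMax objective: warp events to t=0 along a constant flow
    v=(vx, vy), accumulate them into an image of warped events
    (IWE), and return the IWE variance (sharper -> higher)."""
    wx = np.clip(np.round(xs - v[0] * ts).astype(int), 0, W - 1)
    wy = np.clip(np.round(ys - v[1] * ts).astype(int), 0, H - 1)
    iwe = np.zeros((H, W))
    np.add.at(iwe, (wy, wx), 1.0)  # count events per pixel
    return iwe.var()

# Toy scene (an assumption for illustration): a vertical edge
# moving at 20 px/s generates 500 events over one second.
rng = np.random.default_rng(0)
H, W = 32, 32
ts = rng.uniform(0.0, 1.0, 500)
ys = rng.integers(0, H, 500).astype(float)
xs = 2.0 + 20.0 * ts  # edge sits at x=2 when t=0

# 1-step motion estimation by grid search: the true velocity
# maximizes the contrast of the warped events.
candidates = [(vx, 0.0) for vx in np.linspace(0.0, 30.0, 31)]
best = max(candidates, key=lambda v: iwe_contrast(xs, ys, ts, v, H, W))
print(best)  # -> (20.0, 0.0)
```

In the paper's framework this single-objective search is replaced or augmented: the motion estimator is a swappable module, and the differentiable objective also accounts for which events are noise rather than signal; only the plain CMax step is sketched here.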