🤖 AI Summary
Event cameras produce sparse, high-temporal-resolution event streams that are poorly modeled by existing frame-based or point-cloud approaches: the former compromises temporal fidelity and incurs redundant computation, while the latter suffers from limited performance due to neglecting both explicit and implicit temporal dynamics. To address this, we propose EventMamba, the first framework to integrate state-space models (SSMs), specifically Mamba, into event point cloud processing. It comprises a hierarchical point cloud network and a redesigned global temporal aggregation module that explicitly encodes event timestamps and implicitly captures long-range temporal dependencies. Crucially, EventMamba operates directly on raw event clouds, eliminating frame-based discretization and enabling native spatiotemporal modeling. Evaluated on six action recognition benchmarks, it achieves state-of-the-art performance among point-cloud methods. Moreover, it consistently outperforms frame-based approaches on pose relocalization and eye-movement regression tasks, while significantly reducing computational overhead.
📖 Abstract
Event cameras draw inspiration from biological systems, boasting low latency and high dynamic range while consuming minimal power. Most current approaches to processing Event Clouds involve converting them into frame-based representations, which neglects the sparsity of events, loses fine-grained temporal information, and increases the computational burden. In contrast, Point Cloud is a popular representation for processing 3-dimensional data and serves as an alternative way to exploit local and global spatial features. Nevertheless, previous point-based methods show unsatisfactory performance compared to frame-based methods in dealing with spatio-temporal event streams. To bridge this gap, we propose EventMamba, an efficient and effective framework based on the Point Cloud representation that rethinks the distinction between Event Cloud and Point Cloud, emphasizing vital temporal information. The Event Cloud is fed into a hierarchical structure with staged modules to process both implicit and explicit temporal features. Specifically, we redesign the global extractor to enhance explicit temporal extraction over long event sequences with temporal aggregation and State Space Model (SSM) based Mamba. Our model consumes minimal computational resources in the experiments and still exhibits SOTA point-based performance on six action recognition datasets of different scales. It even outperforms all frame-based methods on both Camera Pose Relocalization (CPR) and eye-tracking regression tasks. Our code is available at: https://github.com/rhwxmx/EventMamba.
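To make the idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of the core ingredients the abstract describes: raw events `(x, y, t)` are sorted by timestamp, the timestamp is explicitly encoded as a normalized feature channel, and a linear state-space scan aggregates the sequence in temporal order into a global descriptor. Real Mamba uses input-dependent (selective) parameters and a parallel scan; here `ssm_scan`, the fixed diagonal `A`, and the random `B`, `C` projections are simplifying assumptions for exposition.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Sequential linear SSM recurrence: h_t = A * h_{t-1} + B @ x_t, y_t = C @ h_t.

    x: (T, d_in) event features ordered by timestamp.
    A: (d_state,) diagonal decay; B: (d_state, d_in); C: (d_out, d_state).
    Illustrative only -- Mamba's A, B, C are input-dependent (selective)
    and the scan is parallelized on hardware.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(x.shape[0]):
        h = A * h + B @ x[t]
        ys.append(C @ h)
    return np.stack(ys)

def global_temporal_aggregate(events):
    """Toy global extractor over a raw event cloud of rows (x, y, t).

    Sorts events by timestamp, adds the normalized timestamp as an
    explicit feature channel, runs the SSM scan in temporal order, and
    returns the final output as a global descriptor.
    """
    events = events[np.argsort(events[:, 2])]          # temporal order
    t = events[:, 2:3]
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9)   # normalize timestamps
    feats = np.concatenate([events[:, :2], t], axis=1) # explicit time channel
    rng = np.random.default_rng(0)                     # fixed weights for the demo
    A = np.full(8, 0.9)                                # stable diagonal decay
    B = rng.standard_normal((8, 3)) * 0.1
    C = rng.standard_normal((4, 8)) * 0.1
    return ssm_scan(feats, A, B, C)[-1]                # last step = global summary

# Three events, deliberately out of temporal order:
events = np.array([[1.0, 2.0, 0.30],
                   [0.5, 1.0, 0.10],
                   [2.0, 0.0, 0.20]])
desc = global_temporal_aggregate(events)
print(desc.shape)  # (4,)
```

Because the scan consumes events one by one in timestamp order, no frame-based discretization of the stream is needed, which is the property the abstract highlights.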