🤖 AI Summary
This work addresses the challenge of modeling sparse, asynchronous event data by proposing E-TIDE, a lightweight, end-to-end trainable architecture for efficient future motion prediction, supporting downstream tasks such as semantic segmentation and object tracking. The method introduces the TIDE module, which combines large-kernel spatiotemporal convolution with an activity-aware gating mechanism to capture spatiotemporal dependencies in event streams without large-scale pretraining, while keeping computational overhead low. Experimental results demonstrate that E-TIDE achieves competitive prediction performance on standard event-based benchmarks while significantly reducing model size and training cost, enabling real-time deployment in latency- and memory-constrained scenarios.
📝 Abstract
Event-based cameras capture visual information as asynchronous streams of per-pixel brightness changes, generating sparse, temporally precise data. Compared to conventional frame-based sensors, they offer significant advantages in capturing high-speed dynamics while consuming substantially less power. Predicting future event representations from past observations is an important problem, enabling downstream tasks such as future semantic segmentation or object tracking without requiring access to future sensor measurements. While recent state-of-the-art approaches achieve strong performance, they often rely on computationally heavy backbones and, in some cases, large-scale pretraining, limiting their applicability in resource-constrained scenarios. In this work, we introduce E-TIDE, a lightweight, end-to-end trainable architecture for event-tensor prediction that is designed to operate efficiently without large-scale pretraining. Our approach employs the TIDE module (Temporal Interaction for Dynamic Events), motivated by efficient spatiotemporal interaction design for sparse event tensors, to capture temporal dependencies via large-kernel mixing and activity-aware gating while maintaining low computational complexity. Experiments on standard event-based datasets demonstrate that our method achieves competitive performance with significantly reduced model size and training requirements, making it well-suited for real-time deployment under tight latency and memory budgets.
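The abstract describes the TIDE module only at a high level: large-kernel mixing along the temporal dimension combined with activity-aware gating over sparse event tensors. The paper does not give an implementation here, so the following is a minimal NumPy sketch of the general idea under stated assumptions: the event tensor is a dense `(T, H, W)` voxel grid, the "large kernel" is a fixed 1-D averaging kernel standing in for a learned one, and the gate is a per-pixel sigmoid of event activity. The function name `tide_block` and all parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def tide_block(events: np.ndarray, kernel_size: int = 7) -> np.ndarray:
    """Illustrative sketch of a TIDE-style block (not the paper's implementation).

    events: (T, H, W) event tensor, e.g. per-bin polarity counts (assumption).
    """
    # Large-kernel temporal mixing: depthwise 1-D convolution along the time
    # axis. A learned kernel would replace this fixed averaging kernel.
    k = np.ones(kernel_size) / kernel_size
    mixed = np.apply_along_axis(
        lambda v: np.convolve(v, k, mode="same"), 0, events
    )
    # Activity-aware gating: pixels with little event activity are suppressed,
    # which keeps computation focused on the sparse active regions.
    activity = np.abs(events).sum(axis=0, keepdims=True)      # (1, H, W)
    gate = 1.0 / (1.0 + np.exp(-(activity - activity.mean())))  # sigmoid in (0, 1)
    return mixed * gate                                        # (T, H, W)
```

Because both the temporal mixing and the gate are computed from the input tensor alone, a pixel with no events anywhere in the window produces a zero output, mirroring the sparsity-preserving intent described in the abstract.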