EventFlash: Towards Efficient MLLMs for Event-Based Vision

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of existing event-based multimodal large language models (MLLMs), which adopt dense image-processing paradigms and neglect the inherent spatiotemporal sparsity of event streams, leading to computational redundancy and slow inference. To overcome this limitation, the authors propose EventFlash, the first sparse MLLM architecture tailored to event vision. EventFlash combines spatiotemporal token sparsification, adaptive temporal window aggregation, and density-guided sparse attention to efficiently model long event sequences. Together with a curriculum training strategy and EventMind, a large-scale, multi-granularity instruction dataset, EventFlash supports inputs of up to 1,000 time bins and achieves 12.4× higher inference throughput than the EventFlash-Zero baseline while maintaining comparable performance, substantially surpassing prior methods such as EventGPT, which are restricted to only 5 time bins.
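The summary mentions adaptive temporal window aggregation, which compresses time-bin tokens while keeping dense periods at fine granularity. The paper's actual module is not specified here; the following is a minimal illustrative sketch, assuming each time bin has already been encoded into one feature vector and using per-bin event counts as a density proxy (both hypothetical choices): windows are cut at equal event-mass boundaries, so busy intervals get more output tokens than quiet ones.

```python
import numpy as np

def adaptive_temporal_aggregation(bin_tokens, bin_counts, budget):
    """Merge T time-bin tokens into `budget` window tokens whose
    boundaries follow event density (illustrative sketch, not the
    paper's module).

    bin_tokens: (T, D) one feature vector per time bin
    bin_counts: (T,)   event count per bin, used as a density proxy
    budget:    number of output window tokens (<= T)
    """
    T, D = bin_tokens.shape
    # Normalized cumulative event mass; split it into `budget`
    # equal-mass slices so each window covers a similar event count.
    mass = np.cumsum(bin_counts).astype(float)
    mass /= mass[-1]
    edges = np.searchsorted(mass, np.linspace(0, 1, budget + 1)[1:-1],
                            side="right")
    windows = np.split(np.arange(T), edges)
    # Mean-pool each window; duplicate edges can yield an empty
    # window, which becomes a zero token.
    out = np.stack([
        bin_tokens[w].mean(axis=0) if len(w) else np.zeros(D)
        for w in windows
    ])
    return out  # (budget, D)
```

With this equal-mass rule, a burst of events splits into several short windows while long quiet stretches collapse into a single pooled token, which is one plausible way to keep key temporal cues under a fixed token budget.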

📝 Abstract
Event-based multimodal large language models (MLLMs) enable robust perception in high-speed and low-light scenarios, addressing key limitations of frame-based MLLMs. However, current event-based MLLMs often rely on dense image-like processing paradigms, overlooking the spatiotemporal sparsity of event streams and resulting in high computational cost. In this paper, we propose EventFlash, a novel and efficient MLLM to explore spatiotemporal token sparsification for reducing data redundancy and accelerating inference. Technically, we build EventMind, a large-scale and scene-diverse dataset with over 500k instruction sets, providing both short and long event stream sequences to support our curriculum training strategy. We then present an adaptive temporal window aggregation module for efficient temporal sampling, which adaptively compresses temporal tokens while retaining key temporal cues. Finally, a sparse density-guided attention module is designed to improve spatial token efficiency by selecting informative regions and suppressing empty or sparse areas. Experimental results show that EventFlash achieves a $12.4\times$ throughput improvement over the baseline (EventFlash-Zero) while maintaining comparable performance. It supports long-range event stream processing with up to 1,000 bins, significantly outperforming the 5-bin limit of EventGPT. We believe EventFlash serves as an efficient foundation model for event-based vision.
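The abstract's sparse density-guided attention module "improves spatial token efficiency by selecting informative regions and suppressing empty or sparse areas." The paper's selection rule is not given here; a minimal sketch of the general idea, assuming one token per spatial patch and per-patch event counts as the density signal (hypothetical names and interface), is to keep only the top-k densest patches before attention:

```python
import numpy as np

def density_guided_selection(patch_tokens, patch_counts, keep_ratio=0.25):
    """Keep only the most event-dense spatial patches and drop
    near-empty ones (illustrative sketch, not the paper's module).

    patch_tokens: (N, D) one token per spatial patch
    patch_counts: (N,)   event count per patch, used as density
    keep_ratio:  fraction of patches to retain
    """
    N = patch_tokens.shape[0]
    k = max(1, int(N * keep_ratio))
    keep = np.argsort(patch_counts)[-k:]  # indices of top-k densest patches
    keep.sort()                           # preserve spatial ordering
    return patch_tokens[keep], keep
```

Since event cameras fire only at brightness changes, most patches of a static background carry no events, so even an aggressive keep ratio can preserve the informative regions while shrinking the attention cost, which scales with the square of the token count.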
Problem

Research questions and friction points this paper is trying to address.

event-based vision
multimodal large language models
spatiotemporal sparsity
computational efficiency
event streams
Innovation

Methods, ideas, or system contributions that make the work stand out.

event-based vision
spatiotemporal sparsification
multimodal large language models
adaptive temporal aggregation
sparse attention
👥 Authors
Shaoyu Liu
Xidian University
Jianing Li
Tsinghua University
Guanghui Zhao
Xidian University
Yunjian Zhang
Tsinghua University
Wen Jiang
Beijing Institute of Technology
Ming Li
Senior Research Scientist, Guangming Lab
AIGC · MLLMs · Embodied AI
Xiangyang Ji
Tsinghua University