🤖 AI Summary
This work addresses the significant performance degradation of multimodal large language models under extreme lighting conditions, where RGB images suffer from severe structural and semantic deterioration. To mitigate this issue, the authors propose a dynamic fusion framework that integrates event streams with RGB frames, featuring a learnable illumination indicator to adaptively modulate the fusion process. Additionally, an illumination correction loss is introduced to align semantic representations with those under normal lighting. The study also presents the first multi-illumination event-instruction dataset. Extensive experiments demonstrate that the proposed method substantially outperforms existing general-purpose, illumination-adaptive, and pure event-based approaches across reasoning, counting, and fine-grained recognition tasks under extreme lighting, establishing a new state-of-the-art.
📝 Abstract
Multimodal Large Language Models (MLLMs) perform strong vision-language reasoning under standard conditions but fail in extreme illumination, where RGB inputs irrecoverably lose structure and semantics. We propose Event-MLLM, an event-enhanced model that performs all-light visual reasoning by dynamically fusing event streams with RGB frames. Two key components drive our approach: an Illumination Indicator, a learnable signal derived from a DINOv2 branch that quantifies exposure degradation and adaptively modulates event-RGB fusion, and an Illumination Correction Loss that aligns fused features with non-degraded (normal-light) semantics in the latent space, compensating for information lost in extreme lighting. We curate the first multi-illumination event-instruction corpus for MLLMs, with 2,241 event-RGB samples (around 6 QA pairs each) across diverse scenes and 17 brightness levels (0.05x to 20x), plus an instruction-following benchmark for reasoning, counting, and fine-grained recognition under extreme lighting. Experiments show that Event-MLLM markedly outperforms general-purpose, illumination-adaptive, and event-only baselines, setting a new state of the art in robust multimodal perception and reasoning under challenging illumination.
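The two components above can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's implementation: the function names, the sigmoid gate over a scalar degradation score (standing in for the DINOv2-derived Illumination Indicator), and the cosine-distance form of the alignment loss are all illustrative choices.

```python
import numpy as np

def gated_fusion(rgb_feat, event_feat, degradation_score):
    """Blend RGB and event features via a learned illumination gate.

    degradation_score is a stand-in for the paper's Illumination
    Indicator: large negative -> normal light (keep RGB features),
    large positive -> extreme light (lean on event features).
    """
    w = 1.0 / (1.0 + np.exp(-degradation_score))  # sigmoid gate in (0, 1)
    return (1.0 - w) * rgb_feat + w * event_feat

def illumination_correction_loss(fused_feat, normal_light_feat):
    """One plausible latent-alignment objective: cosine distance between
    the fused feature and a reference feature extracted under normal
    lighting (0 when the two directions coincide)."""
    a = fused_feat / np.linalg.norm(fused_feat)
    b = normal_light_feat / np.linalg.norm(normal_light_feat)
    return 1.0 - float(a @ b)

# Under normal light (very negative score) the fusion keeps the RGB feature.
fused = gated_fusion(np.ones(4), np.zeros(4), degradation_score=-10.0)
```

The design intuition being sketched: the gate lets the model fall back on event streams only when exposure degradation is detected, while the loss pulls the fused representation toward what the encoder would have produced under normal illumination.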