Visual Grounding from Event Cameras

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the multimodal perception gap between event cameras and natural language understanding by proposing the novel task of language-guided event-based visual grounding. The authors introduce Talk2Event, the first large-scale benchmark for this task, comprising 5,567 real-world driving-scene event sequences, 13,458 object annotations, and over 30,000 referring expressions. Each expression is structured along four attributes (appearance, status, relation to the viewer, and relation to surrounding objects) that capture spatial, temporal, and relational cues, enabling interpretable, compositional cross-modal alignment. Grounding in this setting requires jointly encoding spatiotemporal event representations and linguistic semantics, advancing beyond static object recognition toward contextual reasoning. Talk2Event establishes a foundational resource and evaluation framework for the systematic study of dynamic vision-language grounding, providing both a new task formulation and essential data infrastructure for applications such as autonomous navigation and human-robot interaction in dynamic environments.
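
To make the grounding task concrete, below is a minimal, hypothetical PyTorch sketch of how event data and a referring expression might be jointly encoded to score candidate objects. Everything here is an illustrative assumption rather than the paper's architecture: the name `EventLanguageGrounder`, the 8-bin voxelized event input, the bag-of-tokens text encoder, and the candidate-crop scoring scheme.

```python
import torch
import torch.nn as nn

class EventLanguageGrounder(nn.Module):
    """Hypothetical sketch: fuse an encoded event stream with a text
    embedding and score candidate objects. Names and shapes are
    illustrative assumptions, not the paper's model."""

    def __init__(self, event_dim=256, text_dim=256, hidden=256, vocab=30522):
        super().__init__()
        # Events are assumed pre-voxelized into a dense grid of
        # 8 temporal bins per candidate crop: (N, 8, H, W).
        self.event_encoder = nn.Sequential(
            nn.Conv2d(8, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, event_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Placeholder text encoder: mean-pooled token embeddings.
        self.text_encoder = nn.EmbeddingBag(vocab, text_dim)
        self.fuse = nn.Sequential(
            nn.Linear(event_dim + text_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # one grounding score per candidate
        )

    def forward(self, event_voxels, token_ids):
        # event_voxels: (N, 8, H, W) float crops around candidates;
        # token_ids: (1, seq_len) long tensor for one expression.
        ev = self.event_encoder(event_voxels)      # (N, event_dim)
        tx = self.text_encoder(token_ids)          # (1, text_dim)
        tx = tx.expand(ev.size(0), -1)             # broadcast over N
        return self.fuse(torch.cat([ev, tx], dim=-1)).squeeze(-1)
```

At inference, the expression would be tokenized, each candidate's score computed, and the highest-scoring object returned as the grounded referent.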

📝 Abstract
Event cameras capture changes in brightness with microsecond precision and remain reliable under motion blur and challenging illumination, offering clear advantages for modeling highly dynamic scenes. Yet their integration with natural language understanding has received little attention, leaving a gap in multimodal perception. To address this, we introduce Talk2Event, the first large-scale benchmark for language-driven object grounding using event data. Built on real-world driving scenarios, Talk2Event comprises 5,567 scenes, 13,458 annotated objects, and more than 30,000 carefully validated referring expressions. Each expression is enriched with four structured attributes (appearance, status, relation to the viewer, and relation to surrounding objects) that explicitly capture spatial, temporal, and relational cues. This attribute-centric design supports interpretable and compositional grounding, enabling analysis that moves beyond simple object recognition to contextual reasoning in dynamic environments. We envision Talk2Event as a foundation for advancing multimodal and temporally aware perception, with applications spanning robotics, human-AI interaction, and beyond.
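
As a rough illustration of the attribute-centric design described above, the sketch below models one scene and one referring expression with the four attributes named in the abstract. The field names, the `Event` tuple layout, and the bounding-box format are assumptions for illustration; the dataset's actual schema is not specified here.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Assumed raw-event layout: pixel coordinates, microsecond timestamp,
# and polarity (+1 brightness increase, -1 decrease).
Event = Tuple[int, int, int, int]  # (x, y, t_us, polarity)

@dataclass
class ReferringAnnotation:
    """One validated referring expression and its four structured
    attributes, mirroring the abstract. Field names are assumptions."""
    expression: str        # e.g. "the cyclist crossing from the left"
    target_box: Tuple[float, float, float, float]  # assumed (x, y, w, h)
    appearance: str        # what the object looks like
    status: str            # motion or state over time (temporal cue)
    relation_to_viewer: str    # position relative to the ego camera
    relation_to_others: str    # position relative to surrounding objects

@dataclass
class EventScene:
    """One driving scene: its event stream plus grounded expressions."""
    events: List[Event] = field(default_factory=list)
    annotations: List[ReferringAnnotation] = field(default_factory=list)
```

Structuring each expression along these four axes is what makes grounding compositional: a model can be probed on appearance-only, motion-only, or relation-only cues rather than on a single opaque caption.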
Problem

Research questions and friction points this paper is trying to address.

Grounding objects in event-camera data from natural-language descriptions
Bridging the multimodal perception gap between event cameras and language
Moving beyond object recognition to contextual reasoning in dynamic environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Talk2Event, the first benchmark for language-driven object grounding on event data
Structured four-attribute design for interpretable, compositional grounding
Large-scale multimodal dataset built on real-world driving scenarios