🤖 AI Summary
This work addresses the significant performance degradation of conventional vision-language-action (VLA) models in visually degraded conditions—such as extreme low light and motion blur—where reliance on standard frame-based images fails. To overcome this limitation, the study introduces the first systematic integration of event camera data into the VLA framework, proposing lightweight, pretraining-compatible event fusion strategies: event accumulation map overlay and an event adapter. These methods enhance semantic perception and action consistency without requiring image reconstruction. Evaluated on a newly built synchronized RGB-event-action teleoperation platform, the approach boosts Pick-Place task success rates from 0% to 60% (overlay) and up to 90% (adapter) under 20-lux illumination. Under severe 1000ms motion blur, it achieves 20–25% success in Pick-Place and 32.5% in Sorting tasks, substantially improving robotic robustness in adverse environments.
📝 Abstract
Robotic Vision-Language-Action (VLA) models generalize well to open-ended manipulation, but their perception is fragile under sensing-stage degradations such as extreme low light, motion blur, and black clipping. We present E-VLA, an event-augmented VLA framework that improves manipulation robustness when conventional frame-based vision becomes unreliable. Instead of reconstructing images from events, E-VLA directly leverages motion and structural cues in event streams to preserve semantic perception and perception-action consistency under adverse conditions. We build an open-source teleoperation platform with a DAVIS346 event camera and collect a real-world synchronized RGB-event-action manipulation dataset across diverse tasks and illumination settings. We also propose lightweight, pretraining-compatible event integration strategies and study event windowing and fusion for stable deployment. Experiments show that even a simple parameter-free fusion, i.e., overlaying accumulated event maps onto RGB images, can substantially improve robustness in dark and blur-heavy scenes: on Pick-Place at 20 lux, success increases from 0% (image-only) to 60% with overlay fusion and to 90% with our event adapter; under severe motion blur (1000 ms exposure), Pick-Place improves from 0% to 20–25%, and Sorting from 5% to 32.5%. Overall, E-VLA provides systematic evidence that event-driven perception can be effectively integrated into VLA models, pointing toward robust embodied intelligence beyond conventional frame-based imaging. Code and dataset will be available at https://github.com/JJayzee/E-VLA.
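The abstract describes the overlay fusion only at a high level. A minimal sketch of one plausible implementation is given below, assuming events arrive as (timestamp, x, y, polarity) tuples and that the overlay is a simple alpha blend of a normalized event-activity map onto the RGB frame; the event layout, the `alpha` parameter, and the function names are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def accumulate_events(events, height, width):
    """Accumulate signed polarity counts into a 2D event map.

    `events`: (N, 4) array of (t, x, y, polarity) rows with polarity
    in {-1, +1}. This layout is an assumption for illustration.
    """
    event_map = np.zeros((height, width), dtype=np.float32)
    for _, x, y, p in events:
        event_map[int(y), int(x)] += p
    return event_map

def overlay_fusion(rgb, event_map, alpha=0.5):
    """Blend a normalized event-activity map onto an RGB frame."""
    mag = np.abs(event_map)
    norm = mag / (mag.max() + 1e-8)        # scale activity to [0, 1]
    heat = (norm * 255).astype(np.uint8)   # single-channel event image
    heat3 = np.repeat(heat[..., None], 3, axis=-1)
    return (alpha * rgb + (1 - alpha) * heat3).astype(np.uint8)
```

Because the fused output is still an ordinary RGB image, a sketch like this could feed a pretrained VLA backbone without architectural changes, which is the appeal of the overlay strategy reported in the paper.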