🤖 AI Summary
Video-based imitation learning suffers significant performance degradation under visual domain shifts—such as variations in lighting and texture—between expert demonstration videos and the learner's environment. To address this, we propose an event-inspired perception paradigm that converts RGB video into sparse, temporal intensity-gradient representations, thereby decoupling motion dynamics from static appearance at the representation level and eliminating the need for domain randomization via data augmentation. Inspired by biological transient vision mechanisms, we introduce the first adversarial imitation learning framework grounded in event-stream encoding, enabling zero-shot cross-domain generalization. Evaluated on the DeepMind Control Suite and the Adroit dexterous manipulation benchmark, our method improves cross-domain performance by 37% over domain-randomization baselines—without requiring environment-specific augmentations—and achieves, for the first time, representation-level motion–appearance disentanglement for visually robust imitation learning.
📝 Abstract
Imitation from videos often fails when expert demonstrations and learner environments exhibit domain shifts, such as discrepancies in lighting, color, or texture. While visual randomization partially addresses this problem by augmenting training data, it remains computationally intensive and inherently reactive, struggling with unseen scenarios. We propose a different approach: instead of randomizing appearances, we eliminate their influence entirely by rethinking the sensory representation itself. Inspired by biological vision systems that prioritize temporal transients (e.g., retinal ganglion cells) and by recent sensor advancements, we introduce event-inspired perception for visually robust imitation. Our method converts standard RGB videos into a sparse, event-based representation that encodes temporal intensity gradients, discarding static appearance features. This biologically grounded approach disentangles motion dynamics from visual style, enabling robust visual imitation from observations even in the presence of visual mismatches between expert and agent environments. By training policies on event streams, we achieve invariance to appearance-based distractors without requiring computationally expensive, environment-specific data augmentation techniques. Experiments across the DeepMind Control Suite and the Adroit platform for dynamic dexterous manipulation demonstrate the efficacy of our method. Our code is publicly available in the Eb-LAIfO repository.
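The core idea of converting RGB video into an event-style representation can be illustrated with a minimal sketch. The paper's actual encoding is not specified here, so the function below is an assumption: it mimics an event camera by thresholding log-intensity differences between consecutive frames, yielding a sparse, signed (±1) motion map in which static appearance (color, texture, uniform brightness) cancels out.

```python
import numpy as np

def rgb_to_events(frames, threshold=0.1, eps=1e-6):
    """Convert RGB frames (T, H, W, 3), values in [0, 1], into an
    event-style map (T-1, H, W) with values in {-1, 0, +1}:
    +1 where log intensity rose by more than `threshold`,
    -1 where it fell by more than `threshold`, 0 otherwise.
    (Illustrative sketch, not the paper's exact encoding.)"""
    # Grayscale intensity via standard luminance weights.
    gray = frames @ np.array([0.299, 0.587, 0.114])
    # Log intensity, as in event cameras; eps avoids log(0).
    log_i = np.log(gray + eps)
    # Temporal gradient between consecutive frames.
    diff = np.diff(log_i, axis=0)
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1
    events[diff < -threshold] = -1
    return events
```

Note the appearance invariance this buys: uniformly rescaling the brightness of every frame (a simple visual domain shift) shifts all log intensities by the same constant, so the temporal differences, and hence the event map fed to the policy, are essentially unchanged.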