Human-Robot Navigation using Event-based Cameras and Reinforcement Learning

📅 2025-06-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional frame-based robotic navigation suffers from motion blur and high latency due to fixed-frame-rate sampling. To address this, we propose a real-time, human-centered navigation framework that fuses asynchronous event-camera sensing with multimodal perception (event streams + distance sensors). Methodologically, we adopt a two-stage reinforcement learning strategy: first pretraining the policy network via imitation learning, then refining it end-to-end using Deep Deterministic Policy Gradient (DDPG). This work is the first to integrate the event camera’s ultra-low latency and high dynamic range with a staged policy learning paradigm, significantly improving response speed, environmental adaptability, and sample efficiency. Simulation results demonstrate robust real-time pedestrian following and dynamic obstacle avoidance, validating the framework’s effectiveness and generalizability in highly agile scenarios.

Technology Category

Application Category

📝 Abstract
This work introduces a robot navigation controller that combines event cameras and other sensors with reinforcement learning to enable real-time human-centered navigation and obstacle avoidance. Unlike conventional image-based controllers, which operate at fixed rates and suffer from motion blur and latency, this approach leverages the asynchronous nature of event cameras to process visual information over flexible time intervals, enabling adaptive inference and control. The framework integrates event-based perception, additional range sensing, and policy optimization via Deep Deterministic Policy Gradient, with an initial imitation learning phase to improve sample efficiency. Promising results are achieved in simulated environments, demonstrating robust navigation, pedestrian following, and obstacle avoidance. A demo video is available at the project website.
Problem

Research questions and friction points this paper is trying to address.

Develop real-time human-centered robot navigation using event cameras
Overcome motion blur and latency in conventional image-based controllers
Integrate event-based perception and reinforcement learning for obstacle avoidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines event cameras with reinforcement learning
Uses asynchronous event data for adaptive control
Integrates imitation learning for efficiency
🔎 Similar Papers
No similar papers found.
I
Ignacio Bugueno-Cordova
Department of Electrical Engineering, Universidad de Chile; Institute of Engineering Sciences, Universidad de O’Higgins
J
J. Ruiz-del-Solar
Department of Electrical Engineering, Universidad de Chile; Advanced Mining Technology Center (AMTC), Universidad de Chile
Rodrigo Verschae
Rodrigo Verschae
Universidad de O'Higgins (UOH)
Computer VisionMachine LearningVisual & Spatial IARoboticsPrecision Agriculture