AI Summary
Real-time detection of moving doorframes and energy-efficient obstacle-avoidance navigation remain challenging for resource-constrained micro-drones in dynamic environments. Method: We propose an event-driven, physics-guided neuromorphic framework that integrates a dynamic vision sensor (DVS) with an unsupervised spiking neural network (SNN) for millisecond-scale, low-power target detection, and a lightweight physics-guided neural network (PgNN) that embeds rigid-body dynamics priors into an end-to-end vision-to-planning closed loop, ensuring energy optimality, robustness, and symbolic interpretability. Results: Evaluated in Gazebo/ROS simulation, the system achieves real-time moving-doorframe detection and near-minimum-energy path planning, reducing inference latency by 62% and power consumption by 58%. To our knowledge, this is the first work to synergistically integrate unsupervised event-based detection with physics-guided neural decision-making, empirically demonstrating the feasibility and advantages of coupling event cameras with physics-informed AI for autonomous micro-drone navigation.
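The near-minimum-energy planning claim can be illustrated with a toy energy model (our illustrative assumption, not the paper's PgNN): for a point-to-point flight of distance `d` completed in time `T`, total energy is modeled as hover cost plus a maneuver term, `E(T) = p_hover * T + c * d**2 / T**3`, which admits a closed-form optimal flight time `T* = (3 * c * d**2 / p_hover) ** 0.25`. The constants `p_hover` and `c` are hypothetical placeholders.

```python
import numpy as np

def flight_energy(T, d, p_hover=150.0, c=2.0):
    """Toy energy model (illustrative assumption, not the paper's PgNN):
    hover cost grows with flight time; maneuver cost shrinks with it."""
    return p_hover * T + c * d**2 / T**3

def optimal_flight_time(d, p_hover=150.0, c=2.0):
    """Closed-form minimizer of the toy model: dE/dT = 0 at
    T* = (3 * c * d**2 / p_hover) ** 0.25."""
    return (3.0 * c * d**2 / p_hover) ** 0.25

d = 5.0
T_star = optimal_flight_time(d)

# Sanity-check the closed form against a dense numerical sweep.
Ts = np.linspace(0.1, 5.0, 10000)
T_num = Ts[np.argmin(flight_energy(Ts, d))]
```

A learned PgNN would replace the closed form, with the dynamics prior constraining its predictions toward such energy-consistent flight times.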
Abstract
Vision-based object tracking is a critical component of autonomous aerial navigation, particularly for obstacle avoidance. Neuromorphic Dynamic Vision Sensors (DVS), or event cameras, inspired by biological vision, offer a promising alternative to conventional frame-based cameras: they detect intensity changes asynchronously, even under challenging lighting, with high dynamic range and resistance to motion blur. Spiking neural networks (SNNs) are increasingly used to process these event-based signals efficiently and asynchronously. Meanwhile, physics-based artificial intelligence (AI) incorporates system-level knowledge into neural networks via physical modeling, which improves robustness and energy efficiency and provides symbolic explainability. In this work, we present a neuromorphic framework for autonomous drone navigation, focused on detecting and flying through moving gates while avoiding collisions. We use event cameras to detect moving objects with a shallow SNN architecture trained in an unsupervised manner, combined with a lightweight, energy-aware physics-guided neural network (PgNN) trained on depth inputs to predict optimal flight times, generating near-minimum-energy paths. The system is implemented in the Gazebo simulator as a sensor-fused, vision-to-planning neuro-symbolic pipeline built on the Robot Operating System (ROS) middleware. This work highlights the potential of integrating event-based vision with physics-guided planning for energy-efficient, low-latency autonomous navigation.
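The unsupervised event-driven detection idea can be sketched with a per-pixel leaky integrate-and-fire (LIF) rule over a synthetic DVS event stream. Everything here is an illustrative assumption, not the paper's SNN: the event format `(x, y, t_ms)`, the decay constant `tau`, the threshold `v_th`, and the hand-made stream. The point is only that pixels with dense recent activity (moving edges) cross threshold and spike, while sparse background noise decays away.

```python
import numpy as np

def lif_event_detector(events, shape=(32, 32), tau=10.0, v_th=3.0):
    """Per-pixel LIF detection over a DVS-style event stream.
    Each event bumps its pixel's membrane potential; potentials leak
    between events, so only pixels with dense recent activity spike.
    events: array of (x, y, t_ms) rows, sorted by t_ms.
    Returns a boolean spike map marking detected motion."""
    v = np.zeros(shape)
    spikes = np.zeros(shape, dtype=bool)
    last_t = 0.0
    for x, y, t in events:
        v *= np.exp(-(t - last_t) / tau)   # leak since previous event
        last_t = t
        v[int(y), int(x)] += 1.0           # integrate the incoming event
        if v[int(y), int(x)] >= v_th:      # fire and reset
            spikes[int(y), int(x)] = True
            v[int(y), int(x)] = 0.0
    return spikes

# Synthetic stream: a dense burst along a "moving edge" at column 10,
# plus sparse background noise that should stay below threshold.
rng = np.random.default_rng(0)
edge = [(10, y, float(t)) for t in np.arange(0, 20, 0.5) for y in range(8, 12)]
noise = [(int(rng.integers(32)), int(rng.integers(32)), float(t))
         for t in range(0, 200, 40)]
stream = np.array(sorted(edge + noise, key=lambda e: e[2]))
spike_map = lif_event_detector(stream)
```

A real SNN would learn its weights from event statistics (e.g. via spike-timing-dependent plasticity) rather than use a fixed per-pixel rule, but the thresholded leaky integration is the core mechanism that makes event-based detection cheap and low-latency.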