🤖 AI Summary
Quadrupedal robots face significant challenges in high-dynamic obstacle traversal outdoors, owing to the large latency and poor illumination robustness of conventional visual sensors (e.g., depth cameras), as well as the high computational cost and power consumption of deep neural networks. Method: This work pioneers the tight integration of bio-inspired event cameras with spiking neural networks (SNNs) to establish a low-latency, low-power, high-dynamic-range perception–decision closed loop. The approach synergistically combines event-driven sensing, SNN-based sparse computation, reinforcement learning–based control, and a neuromorphic hardware architecture. Results: Executing the parkour task in complex outdoor environments, the proposed system consumes only 11.7% of the energy of an equivalent artificial neural network (ANN) baseline, an 88.3% reduction, while markedly improving real-time performance and environmental adaptability, thereby overcoming critical deployment bottlenecks of traditional approaches in outdoor settings.
📝 Abstract
In recent years, quadruped robotics has advanced significantly, particularly in perception and motion control via reinforcement learning, enabling complex motions in challenging environments. Visual sensors such as depth cameras enhance stability and robustness but face limitations, including operating frequencies far below joint-control rates and sensitivity to lighting, which hinder outdoor deployment. Additionally, deep neural networks in sensing and control pipelines increase computational demands. To address these issues, we introduce spiking neural networks (SNNs) and event cameras to perform a challenging quadruped parkour task. Event cameras capture dynamic visual data, while SNNs efficiently process the resulting spike sequences, mimicking biological perception. Experimental results demonstrate that this approach significantly outperforms traditional models, achieving excellent parkour performance with just 11.7% of the energy consumption of an artificial neural network (ANN)-based model, an 88.3% energy reduction. By integrating event cameras with SNNs, our work advances robotic reinforcement learning and opens new possibilities for applications in demanding environments.
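To make the perception pathway concrete, here is a minimal sketch (not the authors' implementation) of how a leaky integrate-and-fire (LIF) spiking layer can process event-camera output: sparse binary ON/OFF event frames drive spike activity, and downstream computation scales with the number of spikes rather than with dense activations. All names and constants (`lif_step`, `tau`, `v_th`, the layer sizes) are illustrative assumptions.

```python
import numpy as np

def lif_step(v, spikes_in, w, tau=0.9, v_th=1.0):
    """One LIF time step: leak, integrate weighted input spikes, fire, reset."""
    v = tau * v + w @ spikes_in                   # leaky integration of inputs
    spikes_out = (v >= v_th).astype(np.float32)   # threshold crossing -> spike
    v = v * (1.0 - spikes_out)                    # hard reset for fired neurons
    return v, spikes_out

rng = np.random.default_rng(0)
n_in, n_out, T = 128, 32, 50                      # input channels, neurons, steps
w = rng.normal(0.0, 0.3, size=(n_out, n_in)).astype(np.float32)
v = np.zeros(n_out, dtype=np.float32)

for t in range(T):
    # Sparse binary frame standing in for ON/OFF events from the camera.
    events = (rng.random(n_in) < 0.05).astype(np.float32)
    v, out = lif_step(v, events, w)
    # Downstream work scales with out.sum() (sparse spikes), not with
    # dense activations as in a conventional ANN layer.
```

The sparsity of `out` at each step is where the reported energy gap comes from: an ANN multiplies dense activations at every layer, whereas an SNN only accumulates for neurons that actually spiked.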