🤖 AI Summary
This work addresses the lack of efficient, real-time simulation platforms for training and closed-loop evaluation of neuromorphic perception algorithms in complex dynamic environments. To this end, we present a high-performance real-time simulation library that integrates GPU-accelerated multirotor dynamics with multimodal sensor simulation (event cameras, RGB, depth, and IMU) and enables low-latency, high-throughput tensor-level communication via Cortex, a ZeroMQ-based messaging system compatible with NumPy and PyTorch. Our system pioneers the integration of neuromorphic simulation with temporally synchronized multimodal data streams, achieving simulation throughput of up to 2700 FPS, which significantly accelerates self-supervised learning and closed-loop control validation. The open-sourced code provides a robust foundation for rapid development and deployment of robotic perception and control algorithms.
📝 Abstract
Neurosim is a high-performance library for real-time simulation of sensors such as dynamic vision sensors, RGB cameras, depth sensors, and inertial sensors, as well as the agile dynamics of multi-rotor vehicles in complex and dynamic environments. Neurosim achieves frame rates as high as ~2700 FPS on a desktop GPU. It integrates with Cortex, a ZeroMQ-based communication library, to facilitate seamless integration with machine learning and robotics workflows. Cortex provides a high-throughput, low-latency message-passing system for Python and C++ applications, with native support for NumPy arrays and PyTorch tensors. This paper discusses the design philosophy behind Neurosim and Cortex and demonstrates how they can be used to (i) train neuromorphic perception and control algorithms, e.g., using self-supervised learning on time-synchronized multi-modal data, and (ii) test real-time implementations of these algorithms in closed loop. Neurosim and Cortex are available at https://github.com/grasp-lyrl/neurosim.
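The abstract describes Cortex as a high-throughput, low-latency message-passing layer built on ZeroMQ with native NumPy support, but does not show its API. As a hedged sketch of the general technique such a layer relies on, the snippet below sends a NumPy array over ZeroMQ as a metadata frame (dtype and shape) followed by a zero-copy payload frame, and reconstructs it on the receiving side. All names and the `inproc` endpoint are illustrative assumptions, not Cortex's actual interface:

```python
import numpy as np
import zmq

# Two PAIR sockets over an in-process transport stand in for a
# sender/receiver pair; Cortex's real transports and socket types may differ.
ctx = zmq.Context.instance()
server = ctx.socket(zmq.PAIR)
server.bind("inproc://cortex-demo")
client = ctx.socket(zmq.PAIR)
client.connect("inproc://cortex-demo")

# Sender: ship dtype/shape metadata first, then the raw buffer.
# copy=False lets pyzmq reference the array's memory directly.
events = np.arange(12, dtype=np.float32).reshape(3, 4)
client.send_json({"dtype": str(events.dtype), "shape": events.shape}, zmq.SNDMORE)
client.send(events, copy=False)

# Receiver: rebuild the array from the two frames without copying the payload.
meta = server.recv_json()
buf = server.recv()
received = np.frombuffer(buf, dtype=meta["dtype"]).reshape(meta["shape"])
print(received.shape)  # (3, 4)
```

The two-frame layout (small JSON header plus a raw binary frame) is a common way to keep the hot path free of serialization overhead; a PyTorch tensor could be sent the same way via its underlying buffer.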