🤖 AI Summary
To address the trade-off between accuracy and efficiency in radar-based human activity recognition (HAR) for edge deployment—particularly in resource-constrained scenarios such as aircraft marshalling—this work pioneers the application of spiking neural networks (SNNs) to radar gesture recognition. We propose a hybrid CNN-SNN architecture: a convolutional neural network (CNN) front-end extracts spatial features from radar spectrograms, while a back-end of leaky integrate-and-fire (LIF) neurons models the temporal dynamics of radar signals, enabling end-to-end learning. The architecture reduces parameter count by 88% with negligible accuracy degradation (<1%), achieving high classification accuracy on both the aircraft marshalling gesture dataset and the Google Soli dataset. It significantly lowers inference latency and energy consumption, and demonstrates strong cross-dataset generalization. These results validate SNNs as an effective and competitive solution for low-power, edge-deployable radar HAR systems.
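The temporal back-end described above is built from leaky integrate-and-fire (LIF) neurons, which accumulate input over time and emit sparse spikes instead of dense activations. A minimal sketch of the LIF update in plain Python follows; the decay factor `beta`, the threshold, and the subtract-reset rule are illustrative assumptions, not the paper's exact hyperparameters.

```python
def lif_step(v, current, beta=0.9, threshold=1.0):
    """One LIF time step: leak, integrate input, spike and reset."""
    v = beta * v + current              # leaky integration of input current
    spike = 1 if v >= threshold else 0  # fire when membrane potential crosses threshold
    if spike:
        v -= threshold                  # soft reset (subtract threshold)
    return v, spike

def run_lif(inputs, beta=0.9, threshold=1.0):
    """Unroll one neuron over a sequence of input currents
    (e.g. one CNN feature channel sampled per radar frame)."""
    v, spikes = 0.0, []
    for current in inputs:
        v, spike = lif_step(v, current, beta, threshold)
        spikes.append(spike)
    return spikes

# A constant sub-threshold drive accumulates until the neuron fires,
# yielding a sparse spike train rather than a dense activation.
print(run_lif([0.4] * 6))  # → [0, 0, 1, 0, 0, 1]
```

This sparsity is what underlies the latency and energy savings the summary reports: downstream computation only occurs on spike events.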
📝 Abstract
Radar-based Human Activity Recognition (HAR) offers privacy and robustness advantages over camera-based methods, yet remains computationally demanding for edge deployment. We present the first use of Spiking Neural Networks (SNNs) for radar-based HAR on aircraft marshalling signal classification. Our novel hybrid architecture combines convolutional modules for spatial feature extraction with Leaky Integrate-and-Fire (LIF) neurons for temporal processing, inherently capturing gesture dynamics. The model reduces trainable parameters by 88% with under 1% accuracy loss compared to baselines, and generalizes well to the Soli gesture dataset. Through systematic comparisons with Artificial Neural Networks, we demonstrate the trade-offs of spiking computation in terms of accuracy, latency, memory, and energy, establishing SNNs as an efficient and competitive solution for radar-based HAR.