🤖 AI Summary
This work addresses the limited parallelizability of LSTM autoencoders, whose inherent sequential dependencies hinder real-time anomaly detection. The authors propose an FPGA-based accelerator, implemented on a Zynq UltraScale+ MPSoC, whose dataflow architecture enables, for the first time, cross-timestep parallel processing of multi-layer LSTM autoencoders, exploiting temporal parallelism across layers. The design improves scalability with network depth and energy efficiency, achieving up to 79.6× lower latency than a CPU baseline and 18.2× than a GPU baseline, while cutting energy per timestep by up to 1722× and 59.3×, respectively.
📝 Abstract
Recurrent Neural Networks (RNNs) are vital for sequential data processing, and Long Short-Term Memory Autoencoders (LSTM-AEs) are particularly effective for unsupervised anomaly detection in time-series data. However, their inherent sequential dependencies limit parallel computation. While previous work has explored FPGA-based acceleration of LSTM networks, those efforts have typically optimized a single LSTM layer at a time. We introduce a novel FPGA-based accelerator with a dataflow architecture that exploits temporal parallelism, letting multiple layers concurrently process different timesteps within a sequence. Experimental evaluations on four representative LSTM-AE models of varying width and depth, implemented on a Zynq UltraScale+ MPSoC FPGA, demonstrate significant advantages over CPU (Intel Xeon Gold 5218R) and GPU (NVIDIA V100) implementations. Our accelerator achieves latency speedups of up to 79.6× vs. CPU and 18.2× vs. GPU, alongside energy-per-timestep reductions of up to 1722× vs. CPU and 59.3× vs. GPU. These results, together with superior scalability in network depth, highlight our approach's potential for high-performance, real-time, power-efficient LSTM-AE-based anomaly detection on FPGAs.
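The temporal parallelism the abstract describes can be illustrated with a toy cycle count. This is a sketch under the simplifying assumption that every layer-timestep computation takes one equal "stage-time"; the function names and numbers are illustrative, not taken from the paper:

```python
# Toy model of the dataflow idea: if layer l can process timestep t while
# layer l+1 processes timestep t-1, a depth-L stack finishes a T-step
# sequence in T + L - 1 stage-times instead of T * L.

def sequential_stage_times(num_timesteps: int, num_layers: int) -> int:
    """Layer-at-a-time execution: each layer consumes the whole sequence
    before the next layer starts, so stages never overlap."""
    return num_timesteps * num_layers

def pipelined_stage_times(num_timesteps: int, num_layers: int) -> int:
    """Dataflow pipeline: layers overlap across timesteps; only the
    pipeline fill (L - 1 stages) is added to the sequence length."""
    return num_timesteps + num_layers - 1

# Hypothetical example: 128-step windows through a 4-layer LSTM-AE stack.
T, L = 128, 4
print(sequential_stage_times(T, L))  # 512
print(pipelined_stage_times(T, L))   # 131
```

Under this idealized model, the speedup from pipelining approaches the number of layers as the sequence grows, which is one intuition for why the paper's multi-layer dataflow design scales well with network depth.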