Evaluating Four FPGA-accelerated Space Use Cases based on Neural Network Algorithms for On-board Inference

📅 2026-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the critical challenge of limited onboard storage and downlink bandwidth in space missions, where high-fidelity sensor data volumes far exceed transmission capabilities, necessitating efficient on-orbit intelligent processing. For the first time, it systematically evaluates the acceleration performance of neural networks across four representative space applications on an AMD ZCU104 FPGA platform, deploying inference pipelines using both Vitis AI (DPU) and Vitis HLS, with an embedded ARM CPU as the baseline. To handle DPU-unsupported operators such as Sigmoid and 3D convolutions, custom hardware extensions are implemented via HLS. Experimental results demonstrate that Vitis AI achieves up to 34.16× speedup over the CPU, while the HLS-based approach yields a 5.4× improvement. With platform power consumption ranging from 1.5 to 6.75 W, energy per inference is significantly reduced across all scenarios, confirming the efficiency and feasibility of FPGA-based on-orbit intelligent processing.
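The summary notes that DPU-unsupported operators such as Sigmoid were implemented as custom HLS hardware extensions. The paper's actual kernels are not reproduced here, but HLS sigmooid units commonly avoid the `exp` function with a piecewise-linear approximation (e.g., the well-known PLAN scheme), which maps to a few comparators and shift-add segments in hardware. A minimal C++ sketch of that idea, purely illustrative:

```cpp
#include <cassert>
#include <cmath>

// Illustrative piecewise-linear sigmoid (PLAN-style approximation).
// This is a hypothetical sketch of the kind of exp-free operator an HLS
// extension might synthesize -- NOT the paper's actual kernel. Segment
// slopes are powers of two, so they reduce to shifts in hardware.
float sigmoid_pwl(float x) {
    float ax = x < 0.0f ? -x : x;       // exploit sigmoid symmetry
    float y;
    if (ax >= 5.0f)        y = 1.0f;                       // saturate
    else if (ax >= 2.375f) y = 0.03125f * ax + 0.84375f;   // tail segment
    else if (ax >= 1.0f)   y = 0.125f  * ax + 0.625f;      // mid segment
    else                   y = 0.25f   * ax + 0.5f;        // near origin
    return x < 0.0f ? 1.0f - y : y;     // sigmoid(-x) = 1 - sigmoid(x)
}
```

The maximum absolute error of this scheme versus the true sigmoid is about 0.019, which is usually tolerable for quantized inference; an actual Vitis HLS port would use `ap_fixed` types and `#pragma HLS pipeline` rather than `float`.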

📝 Abstract
Space missions increasingly deploy high-fidelity sensors that produce data volumes exceeding onboard buffering and downlink capacity. This work evaluates FPGA acceleration of neural networks (NNs) across four space use cases on the AMD ZCU104 board. We use Vitis AI (AMD DPU) and Vitis HLS to implement inference, quantify throughput and energy, and expose toolchain and architectural constraints relevant to deployment. Vitis AI achieves up to 34.16× higher inference rate than the embedded ARM CPU baseline, while custom HLS designs reach up to 5.4× speedup and add support for operators (e.g., sigmoids, 3D layers) absent in the DPU. For these implementations, measured MPSoC inference power spans 1.5–6.75 W, reducing energy per inference versus CPU execution in all use cases. These results show that NN FPGA acceleration can enable onboard filtering, compression, and event detection, easing downlink pressure in future missions.
Problem

Research questions and friction points this paper is trying to address.

on-board inference
space missions
data downlink
FPGA acceleration
neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

FPGA acceleration
on-board inference
neural networks
Vitis AI
space missions
Pedro Antunes
LASIGE, Faculty of Sciences, University of Lisbon
Information Systems, Design Science, Business Process Management, Storytelling, BizDevOps
Muhammad Ihsan Al Hafiz
Department of Computer Science, Kungliga Tekniska högskolan (KTH), Stockholm, Sweden
Jonah Ekelund
KTH Royal Institute of Technology
Computational Science, Space Physics, Orbital Dynamics
Ekaterina Dineva
Plasma-astrophysics, KU Leuven, Leuven, Belgium
George Miloshevich
Plasma-astrophysics, KU Leuven, Leuven, Belgium
Panagiotis Gonidakis
Plasma-astrophysics, KU Leuven, Leuven, Belgium
Artur Podobas
Associate Professor, KTH Royal Institute of Technology
Parallel Computing, Neuromorphic Architectures, Reconfigurable Systems