🤖 AI Summary
This work addresses the low inference efficiency and tight power budgets of multi-hop MIMO networks used for over-the-air machine learning. We propose an end-to-end differentiable over-the-air inference framework: fully connected neural network layers are mapped onto cascaded MIMO channels, and the precoding matrices across hops are jointly optimized to enable distributed analog-domain inference. We introduce PrototypeNet, an architecture that enforces strict alignment between neuron count and antenna count, and design a customized loss function that jointly penalizes classification error and latent-vector power, augmented with channel-noise injection during training to enhance robustness. Our approach circumvents the bandwidth and energy bottlenecks inherent in conventional digital transmission, significantly improving inference accuracy under power constraints. Numerical experiments demonstrate that the multi-hop architecture consistently outperforms single-hop baselines at moderate-to-low SNRs, validating both its effectiveness and practicality.
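The core mapping can be illustrated with a small NumPy sketch: if the channel matrix H is known, a precoder P can be chosen so that the cascade H·P reproduces an FC layer's weight matrix W, making the channel itself compute the layer. This is a minimal single-hop, zero-forcing-style illustration under assumed dimensions; the paper obtains its precoders by solving an optimization problem across hops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the paper ties neuron count to antenna count,
# so an FC layer of shape (n_rx, n_tx) rides on an n_rx x n_tx MIMO link.
n_tx, n_rx = 8, 8

W = rng.standard_normal((n_rx, n_tx))     # FC layer weights to imitate
H = rng.standard_normal((n_rx, n_tx))     # MIMO channel (assumed known, full rank)

# Precoder chosen so the cascade H @ P equals W (illustrative design only).
P = np.linalg.pinv(H) @ W

x = rng.standard_normal(n_tx)             # latent vector to transmit
noise = 0.01 * rng.standard_normal(n_rx)  # additive receiver noise
y = H @ (P @ x) + noise                   # over-the-air "FC layer" output

# y approximates the digital-layer output W @ x up to the channel noise.
err = np.linalg.norm(y - W @ x)
```

In a multi-hop cascade the same idea repeats per hop, with each relay's precoder shaped so the concatenated channels realize the stacked layers.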
📝 Abstract
A novel over-the-air machine learning framework over multi-hop multiple-input multiple-output (MIMO) networks is proposed. The core idea is to imitate fully connected (FC) neural network layers with multiple MIMO channels by carefully designing the precoding matrices at the transmitting nodes. A neural network dubbed PrototypeNet, consisting of multiple FC layers, is employed, with the number of neurons in each layer equal to the number of antennas at the corresponding terminal. To achieve satisfactory performance, PrototypeNet is trained with noise injection under a customized loss function that combines the classification error with the power of the latent vectors, the latter enforcing the transmit power constraints. The precoding matrix for each hop is then obtained by solving an optimization problem. We also propose a multiple-block extension for the case where the number of antennas is limited. Numerical results verify that the proposed over-the-air transmission scheme achieves satisfactory classification accuracy under a power constraint, and that classification accuracy improves with an increasing number of hops at modest signal-to-noise ratios (SNRs).
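The training objective described above, classification error plus a latent-power penalty, evaluated with noise injected into the latent vector, can be sketched as follows. The layer shapes, the penalty weight `lam`, and the noise level `sigma` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax_xent(logits, label):
    """Cross-entropy of one sample against an integer class label."""
    z = logits - logits.max()                 # stabilized log-softmax
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

# Hypothetical two-layer PrototypeNet head (dimensions chosen for illustration).
W1 = 0.1 * rng.standard_normal((8, 16))
W2 = 0.1 * rng.standard_normal((4, 8))
lam, sigma = 1e-2, 0.1                        # power-penalty weight, noise std

def custom_loss(x, label):
    z = np.tanh(W1 @ x)                       # latent vector sent over the air
    z_noisy = z + sigma * rng.standard_normal(z.shape)  # noise injection
    logits = W2 @ z_noisy
    power = np.mean(z ** 2)                   # transmit-power term of the loss
    return softmax_xent(logits, label) + lam * power

x = rng.standard_normal(16)
val = custom_loss(x, label=2)
```

In training, the power term discourages latent vectors that would violate the transmit power constraint, while the injected noise exposes the network to channel-like perturbations so the learned features remain robust at inference time.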