🤖 AI Summary
This work addresses an over-the-air federated edge learning (Air-FEEL) system that integrates wireless sensing, communication, and edge computation, focusing on accelerating model convergence under the strong coupling among these three functionalities. It formulates and quantifies, for the first time, the joint impact of sensing noise and over-the-air computation (AirComp) distortion on the convergence rate under per-round latency and device energy constraints, revealing their intrinsic coupling through shared resource allocation. A low-complexity alternating optimization algorithm is proposed to jointly design the batch size, CPU frequency, and AirComp transmit power, and theoretical analysis establishes a convergence bound. Experiments on a human activity recognition task show that the proposed method improves both convergence speed and model accuracy, enabling efficient distributed training while strictly satisfying the latency and energy budgets.
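The structural property reported for the per-device solution — with the batch size fixed, the optimal computation speed is the slowest one that still meets the per-round deadline — can be illustrated with a toy sketch. The function name, the linear cycles-per-sample compute model, and all parameter values below are illustrative assumptions, not the paper's formulation.

```python
def min_cpu_frequency(batch_size, cycles_per_sample, round_deadline, other_latency):
    """Slowest CPU frequency (cycles/s) that finishes local training on
    `batch_size` samples within the per-round deadline.

    Assumes a simple linear compute model: compute_time = cycles_per_sample
    * batch_size / frequency. Names and model are illustrative only.
    """
    compute_budget = round_deadline - other_latency  # time left after sensing/communication
    if compute_budget <= 0:
        raise ValueError("no time left for local computation in this round")
    return cycles_per_sample * batch_size / compute_budget


# Doubling the batch doubles the minimum feasible frequency (and hence raises
# energy cost), which is why batch size and resource allocation are coupled.
f_small = min_cpu_frequency(batch_size=32, cycles_per_sample=1e6,
                            round_deadline=0.5, other_latency=0.1)
f_large = min_cpu_frequency(batch_size=64, cycles_per_sample=1e6,
                            round_deadline=0.5, other_latency=0.1)
```

Since CPU energy typically grows superlinearly with frequency, running at exactly this minimum frequency is the energy-optimal choice under a hard latency constraint, matching the summary's finding.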
📝 Abstract
This paper studies an over-the-air federated edge learning (Air-FEEL) system with integrated sensing, communication, and computation (ISCC), in which one edge server coordinates multiple edge devices to wirelessly sense objects and use the sensing data to collaboratively train a machine learning model for recognition tasks. In this system, over-the-air computation (AirComp) is employed to enable one-shot model aggregation from the edge devices. Under this setup, we analyze the convergence behavior of the ISCC-enabled Air-FEEL in terms of the loss function degradation, particularly taking into account the wireless sensing noise during training data acquisition and the AirComp distortion during over-the-air model aggregation. The result theoretically shows that sensing, communication, and computation compete for network resources and jointly determine the convergence rate. Based on this analysis, we design the ISCC parameters to maximize the loss function degradation while ensuring the latency and energy budgets in each round. The challenge lies in the tightly coupled sensing, communication, and computation processes across different devices. To tackle it, we derive a low-complexity ISCC algorithm that alternately optimizes the batch size control and the network resource allocation. It is found that each device should consume less sensing power when a larger batch of data samples is obtained, and vice versa. Moreover, for a given batch size, the optimal computation speed of a device is the minimum one that satisfies the latency constraint. Numerical results on a human motion recognition task verify the theoretical convergence analysis and show that the proposed ISCC algorithm effectively coordinates the batch size control and resource allocation among sensing, communication, and computation to enhance the learning performance.
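The one-shot AirComp aggregation described above can be sketched in a few lines. The sketch below assumes channel-inversion pre-scaling at the devices and additive Gaussian receiver noise; the function name, scaling rule, and parameters are illustrative assumptions, not the paper's exact power-control design.

```python
import random

def aircomp_aggregate(local_updates, channel_gains, eta=1.0, noise_std=0.0, rng=None):
    """One-shot over-the-air aggregation of per-device model updates.

    Each device pre-scales its update by eta / h_k (channel inversion), so the
    simultaneously transmitted analog signals superpose in the air to
    eta * sum_k update_k; the server adds receiver noise and de-scales.
    Illustrative sketch only; the paper's power control differs.
    """
    dim = len(local_updates[0])
    rng = rng or random.Random(0)
    # Device-side pre-scaling: x_k = (eta / h_k) * update_k.
    transmitted = [[(eta / h) * u_i for u_i in u]
                   for u, h in zip(local_updates, channel_gains)]
    # Channel superposition plus additive receiver noise (the AirComp distortion).
    received = [sum(h * x[i] for x, h in zip(transmitted, channel_gains))
                + rng.gauss(0.0, noise_std)
                for i in range(dim)]
    # Server de-scaling to estimate the average update across devices.
    num_devices = len(local_updates)
    return [r / (eta * num_devices) for r in received]


updates = [[1.0, 2.0], [3.0, 4.0]]   # toy per-device model updates
gains = [0.8, 1.2]                   # toy per-device channel gains
avg = aircomp_aggregate(updates, gains, noise_std=0.0)  # noiseless: exact mean
```

With `noise_std > 0`, the aggregated model is perturbed by the receiver noise, which is precisely the AirComp distortion term whose effect on convergence the paper quantifies.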