🤖 AI Summary
This study addresses energy efficiency optimization in federated learning (FL) systems coexisting with ALOHA-type random-access devices that share uplink bandwidth. The objective is to minimize total system energy consumption while satisfying FL latency and random-access throughput constraints. For the first time, the authors jointly model an FL system operating alongside ALOHA or slotted ALOHA protocols, employing FDMA for FL transmissions and formulating a constrained non-convex optimization problem. They propose a near-optimal, computationally efficient solution method. Experimental results demonstrate that under FL-dominated traffic, ALOHA reduces energy consumption by up to 48%, whereas in random-access-dominated scenarios, slotted ALOHA achieves a 6% energy saving, confirming the proposed framework’s energy efficiency gains and protocol adaptability across diverse traffic patterns.
📝 Abstract
Artificial intelligence-generated traffic is changing the shape of wireless networks. Specifically, as the amount of data generated to train machine learning models is massive, network resources must be carefully allocated to continue supporting standard applications. In this paper, we tackle the problem of allocating radio resources for two sets of concurrent devices communicating in uplink with a gateway over the same bandwidth. One set of devices performs federated learning (FL) and accesses the medium via FDMA, periodically uploading large models. The other set is throughput-oriented and accesses the medium via random access (RA), using either the ALOHA or slotted-ALOHA protocol. We derive close-to-optimal solutions to the non-convex problem of minimizing the system energy consumption subject to FL latency and RA throughput constraints. Our solutions show that ALOHA can sustain high FL efficiency, yielding up to 48% lower consumption when the system is dominated by FL traffic. On the other hand, slotted-ALOHA becomes more efficient when RA traffic dominates, yielding 6% lower consumption.
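The ALOHA vs. slotted-ALOHA trade-off described above is rooted in the classical throughput formulas for the two protocols (these are standard textbook results, not derived from this paper): pure ALOHA has a two-frame vulnerable period, giving throughput S = G·e^(−2G), while slotting halves that period, giving S = G·e^(−G). A minimal sketch of why slotted ALOHA handles heavier RA load better:

```python
import math

def aloha_throughput(G: float) -> float:
    """Pure ALOHA: a frame collides with anything sent in a
    two-frame vulnerable window, so S = G * exp(-2G)."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """Slotted ALOHA: transmissions are aligned to slot boundaries,
    shrinking the vulnerable window to one slot, so S = G * exp(-G)."""
    return G * math.exp(-G)

# Peak throughputs (classical results):
# pure ALOHA peaks at G = 0.5 with S = 1/(2e) ≈ 0.184
# slotted ALOHA peaks at G = 1.0 with S = 1/e  ≈ 0.368
for G in (0.2, 0.5, 1.0):
    print(f"G={G:.1f}: pure={aloha_throughput(G):.3f}, "
          f"slotted={slotted_aloha_throughput(G):.3f}")
```

At light offered load the two protocols perform similarly (and pure ALOHA avoids slot-synchronization overhead), but as RA load G grows, slotted ALOHA sustains roughly twice the throughput, which is consistent with the paper's finding that slotted ALOHA becomes the more energy-efficient choice when RA traffic dominates.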