🤖 AI Summary
Existing time-division multiplexing (TDM) communication algorithms support only pairwise node-to-node communication, which fails to meet the demand for concurrent multi-peer communication from a single node over inter-satellite links. Method: This paper proposes a general-purpose TDM communication framework that breaks the conventional pairwise constraint, enabling synchronous TDM among an arbitrary number of peer nodes. The framework builds on a federated learning testbed architecture and distributed system design principles, and adds a dynamic timeslot scheduling mechanism with a sound, verifiable theoretical basis. We formalize the system model, design the protocol, and validate it experimentally. Contribution/Results: The framework significantly enhances the flexibility, scalability, and practical deployability of TDM communications in satellite networks. It establishes a novel paradigm for coordinated communication in large-scale satellite constellations, supporting heterogeneous and dynamic topologies while maintaining deterministic timing guarantees.
📝 Abstract
The original Python Testbed for Federated Learning Algorithms is a lightweight FL framework that provides three generic algorithms: centralized federated learning, decentralized federated learning, and TDM communication (i.e., peer data exchange in the current time slot). The limitation of the latter is that it allows communication only between pairs of network nodes. This paper presents a new generic algorithm for universal TDM communication that overcomes this limitation, so that a node can communicate with an arbitrary number of peers (assuming the peers also want to communicate with it). The paper covers: (i) the algorithm's theoretical foundation, (ii) the system design, and (iii) the system validation. The main advantage of the new algorithm is that it supports real-world TDM communications over inter-satellite links.
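The core idea above (a node exchanges data within one time slot with every peer that also wants to communicate with it, rather than with a single partner) can be sketched in Python. This is an illustrative toy model only, not the PTB-FLA API; the names `Node` and `tdm_exchange` and the mutual-intent rule as shown are assumptions for the sketch.

```python
# Toy sketch of universal TDM: in one synchronous time slot, data flows
# between every pair of nodes that mutually list each other as peers.
# Hypothetical names (Node, tdm_exchange); not the PTB-FLA implementation.

from dataclasses import dataclass, field


@dataclass
class Node:
    node_id: int
    peers: set             # ids of peers this node wants to talk to
    inbox: dict = field(default_factory=dict)


def tdm_exchange(nodes, slot_data):
    """Run one TDM time slot: deliver data only between mutually willing peers."""
    for node in nodes.values():
        for peer_id in node.peers:
            peer = nodes.get(peer_id)
            # Communication happens only if the desire is mutual.
            if peer is not None and node.node_id in peer.peers:
                peer.inbox[node.node_id] = slot_data[node.node_id]


# Example: node 0 wants {1, 2}; node 1 wants {0}; node 2 wants {0, 1}.
nodes = {
    0: Node(0, {1, 2}),
    1: Node(1, {0}),
    2: Node(2, {0, 1}),
}
tdm_exchange(nodes, {0: "a", 1: "b", 2: "c"})
# Node 0 hears from 1 and 2; node 1 only from 0 (2 -> 1 is not mutual).
```

Note how node 2's wish to reach node 1 is silently dropped because node 1 did not reciprocate, matching the paper's assumption that peers communicate only when both sides want to.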