🤖 AI Summary
While large-scale data transfers in reconfigurable networks have been extensively studied, indirect cooperative flow scheduling for requests that are small relative to single-round transmission capacity has long been overlooked, leading to prolonged completion times and low resource utilization.
Method: This paper presents the first systematic modeling and optimization of cooperative flow scheduling under this scenario. We propose a combinatorial optimization framework integrating fractional matching and indirect routing: fractional matching enables fine-grained bandwidth allocation, while multi-hop indirect paths relax direct-connectivity constraints, supporting demand-driven elastic scheduling. Building upon theoretical schedulability analysis, we design an efficient heuristic algorithm.
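The fractional-matching idea above can be made concrete with a small validity check: in a single round, each node may split its unit of per-round bandwidth across several partners, as long as no node's total outgoing or incoming share exceeds the round capacity. The sketch below is our own illustration of that constraint (the function name, the dict representation, and the capacity normalization to 1 are assumptions, not the paper's notation or algorithm):

```python
def is_feasible_round(weights, n):
    """Check that a fractional bandwidth assignment is a valid
    fractional matching for one round on n nodes.

    `weights` maps (sender, receiver) pairs to the fraction of the
    round's capacity used on that pair. The round is feasible when
    every node's total outgoing and incoming share is at most 1.
    Illustrative helper only; not the paper's formulation.
    """
    out_load = [0.0] * n
    in_load = [0.0] * n
    for (i, j), w in weights.items():
        if w < 0:
            return False  # negative bandwidth shares are meaningless
        out_load[i] += w
        in_load[j] += w
    # Small tolerance guards against floating-point rounding.
    return all(x <= 1.0 + 1e-9 for x in out_load + in_load)


# Node 0 splits its round between nodes 1 and 2 (feasible):
is_feasible_round({(0, 1): 0.5, (0, 2): 0.5}, 3)   # True
# Node 1 would receive 1.3 units in one round (infeasible):
is_feasible_round({(0, 1): 0.7, (2, 1): 0.6}, 3)   # False
```

Indirect routing composes such rounds: a small demand from $i$ to $j$ can occupy a fraction of a round on the hop $i \to k$ and a fraction of a later round on $k \to j$, rather than claiming a full direct matching slot.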
Results: Experiments demonstrate that, in small-scale data transfer scenarios, our approach reduces average flow completion time by 32.7% and improves link resource utilization by 41.5% over state-of-the-art methods, significantly enhancing scheduling efficiency and network adaptability for lightweight traffic.
📝 Abstract
We consider routing in reconfigurable networks, which is also known as coflow scheduling in the literature. The algorithmic literature generally (perhaps implicitly) assumes that the amount of data to be transferred is large. Thus the standard way to model a collection of requested data transfers is by an integer demand matrix $D$, where the entry in row $i$ and column $j$ of $D$ is an integer representing the amount of information that the application wants to send from machine/node $i$ to machine/node $j$. A feasible coflow schedule is then a sequence of matchings, which represents the sequence of data transfers that covers $D$. In this work, we investigate coflow scheduling when the size of some of the requested data transfers may be small relative to the amount of data that can be transferred in one round. We consider schedules that allow fractional matchings and/or that employ indirect routing, and compare the relative utility of these options. We design algorithms that perform much better for small demands than the algorithms in the literature that were designed for large data transfers.
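The demand-matrix formalism in the abstract can be sketched in a few lines: a schedule is a sequence of rounds, each round is a matching over node pairs with remaining demand, and the schedule is feasible once the matchings jointly cover $D$. The greedy cover below is only an illustration of that model (the function name, the greedy matching rule, and the unit per-round capacity are our assumptions); it is not the paper's algorithm and makes no optimality claim:

```python
def greedy_coflow_schedule(D, capacity=1):
    """Cover an integer demand matrix D by a sequence of matchings.

    Each round, greedily pick a matching over entries with remaining
    demand (every node sends to at most one partner and receives from
    at most one), then transfer up to `capacity` units per matched
    pair. Returns the list of rounds, each a list of (i, j) pairs.
    Illustrative sketch of the model only.
    """
    remaining = [row[:] for row in D]
    rounds = []
    while any(v > 0 for row in remaining for v in row):
        used_src, used_dst = set(), set()
        matching = []
        for i, row in enumerate(remaining):
            for j, d in enumerate(row):
                if d > 0 and i not in used_src and j not in used_dst:
                    matching.append((i, j))
                    used_src.add(i)
                    used_dst.add(j)
                    break  # node i is now matched for this round
        for i, j in matching:
            remaining[i][j] -= min(capacity, remaining[i][j])
        rounds.append(matching)
    return rounds


# Two units from 0->0's partner plus one unit 1->1's partner:
# the first round serves both pairs, the second finishes the residue.
greedy_coflow_schedule([[2, 0], [0, 1]])
# -> [[(0, 0), (1, 1)], [(0, 0)]]
```

Note that when demands are small relative to `capacity`, each matched pair in this integral model still consumes a whole matching slot per round, which is exactly the inefficiency that fractional matchings and indirect routing are meant to avoid.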