🤖 AI Summary
In multi-tenant ML data centers, insufficient electrical interconnect bandwidth between accelerators leads to resource underutilization, compute fragmentation, and fault propagation. To address these challenges, this paper proposes Morphlux, a server-scale programmable optical interconnect architecture. Morphlux combines chip-to-chip photonics with a programmable optical switch fabric, enabling runtime reconfiguration of intra-server topologies and logical replacement of failed accelerator chips. Evaluated on an end-to-end hardware prototype, Morphlux improves model training throughput by 1.72×, raises the interconnect bandwidth of tenant compute allocations by up to 66%, and reduces compute fragmentation by up to 70%; by rapidly reprogramming the fabric, it can logically replace a failed accelerator chip in 1.2 seconds. Its key contribution is being the first to bring programmable optical interconnects into multi-accelerator servers, simultaneously delivering high bandwidth, low fragmentation, and fault tolerance, and thereby significantly improving system-level utilization of AI accelerators.
📝 Abstract
We optically interconnect accelerator chips (e.g., GPUs, TPUs) within compute servers using newly viable programmable chip-to-chip photonic fabrics. In contrast, today's commercial multi-accelerator compute servers, the workhorses of ML, use electrical interconnects to network accelerator chips within the server. However, recent trends point to an interconnect bandwidth wall: accelerator FLOPS are scaling faster than the bandwidth of the interconnect between accelerators in the same server. This has led to underutilization and idling of GPU resources in cloud datacenters. We develop Morphlux, a server-scale programmable photonic fabric, to interconnect accelerators within servers. We show that augmenting state-of-the-art photonic ML-centric datacenters with Morphlux can improve the bandwidth of tenant compute allocations by up to 66% and reduce compute fragmentation by up to 70%. We develop a novel end-to-end hardware prototype of Morphlux to demonstrate these performance benefits, which translate to a 1.72x improvement in training throughput of ML models. By rapidly programming the server-scale fabric in our hardware testbed, Morphlux can logically replace a failed accelerator chip in 1.2 seconds.
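To make the reconfiguration idea concrete, below is a minimal Python sketch of the control-plane concept: a circuit switch whose cross-connects are rewritten so that a spare chip inherits a failed chip's place in a tenant's topology. Everything here (`OpticalSwitch`, `FabricController`, the port map) is a hypothetical illustration, not Morphlux's actual interface, and the toy switch ignores real photonic details such as reconfiguration latency and optical loss.

```python
from dataclasses import dataclass, field


@dataclass
class OpticalSwitch:
    """Toy model of a programmable optical circuit switch: a table of
    unidirectional cross-connects from input port to output port."""
    circuits: dict = field(default_factory=dict)

    def program(self, mapping: dict) -> None:
        # Rewrite the whole cross-connect table in one shot.
        self.circuits = dict(mapping)


class FabricController:
    """Tracks the switch port behind each accelerator and the chip-to-chip
    links a tenant's allocation needs, then programs the switch to match."""

    def __init__(self, switch: OpticalSwitch, port_of: dict):
        self.switch = switch
        self.port_of = port_of   # accelerator name -> switch port
        self.links = []          # desired (src_chip, dst_chip) circuits

    def install(self, links) -> None:
        self.links = list(links)
        self.switch.program({self.port_of[s]: self.port_of[d]
                             for s, d in self.links})

    def replace_failed(self, failed: str, spare: str) -> None:
        # "Logical replacement": every circuit touching the failed chip is
        # re-pointed at the spare, so software above keeps addressing the
        # same logical slot while the physical chip changes underneath.
        self.install([(spare if s == failed else s,
                       spare if d == failed else d)
                      for s, d in self.links])


# Example: a 4-accelerator ring; gpu2 fails and a spare takes its place.
switch = OpticalSwitch()
ctl = FabricController(switch, {"gpu0": 0, "gpu1": 1, "gpu2": 2,
                                "gpu3": 3, "spare": 4})
ctl.install([("gpu0", "gpu1"), ("gpu1", "gpu2"),
             ("gpu2", "gpu3"), ("gpu3", "gpu0")])
ctl.replace_failed("gpu2", "spare")
print(switch.circuits)   # gpu1's port now cross-connects to the spare's port
```

The sketch mirrors why the paper's 1.2-second replacement is plausible: only switch state changes, with no physical rewiring and no change to the logical topology the workload sees.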