AI Summary
This work addresses the heavy inter-server communication overhead imposed by conventional algorithms such as ring all-reduce in large-scale distributed learning, which severely limits training efficiency. The authors propose OptINC, a novel architecture that deeply integrates optical interconnect networks with distributed training by leveraging Mach–Zehnder interferometers to perform gradient aggregation and quantization directly in the optical domain, thereby unifying communication and computation. By approximating neural-network weights with unitary and diagonal matrices through optical neural networks, and by combining optical-domain preprocessing with hardware-aware training, OptINC significantly reduces both communication overhead and hardware cost while preserving model accuracy. Experiments demonstrate that OptINC achieves accuracy comparable to ring all-reduce on ResNet50/CIFAR-100 and LLaMA/Wikipedia-1B benchmarks, while eliminating communication overhead entirely.
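The summarized aggregation step can be illustrated with a small software sketch: average per-worker gradients, then uniformly quantize the result. This is a minimal numerical analogue of what OptINC performs optically; the function name, quantization scheme, and parameters here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def aggregate_and_quantize(grads, levels=256):
    """Average per-worker gradients, then uniformly quantize the average.

    Illustrative software stand-in for OptINC's optical-domain
    aggregation + quantization; the uniform scheme is an assumption.
    """
    avg = np.mean(grads, axis=0)                 # gradient averaging
    lo, hi = avg.min(), avg.max()
    step = (hi - lo) / (levels - 1) or 1.0       # guard against a constant gradient
    return np.round((avg - lo) / step) * step + lo  # snap to the nearest level

rng = np.random.default_rng(0)
grads = [rng.standard_normal(8) for _ in range(4)]  # gradients from 4 workers
q = aggregate_and_quantize(grads)
```

With `levels=256`, each entry of the quantized average is within half a quantization step of the true mean, so accuracy loss stays bounded while the payload shrinks to 8 bits per value.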
Abstract
Distributed learning is widely used for training large models on large datasets by distributing parts of the model or dataset across multiple devices and aggregating the computed results for subsequent computations or parameter updates. Existing communication algorithms for distributed learning, such as ring all-reduce, incur heavy communication overhead between servers. Since communication in large-scale systems already uses optical fibers, we propose an Optical In-Network-Computing (OptINC) architecture that offloads computation from the servers onto the optical interconnects. To execute gradient averaging and quantization in the optical domain, we incorporate optical devices such as Mach–Zehnder interferometers (MZIs) into the interconnects. Such a de facto optical neural network (ONN) can effectively reduce the communication overhead of existing distributed training solutions. To reduce dataset complexity for training this neural network, we also propose a preprocessing algorithm implemented in the optical domain. Hardware cost is lowered by approximating the weight matrices of the optical neural network with unitary and diagonal matrices, while accuracy is maintained by a proposed hardware-aware training algorithm. The proposed solution was evaluated on real distributed learning tasks, including ResNet50 on CIFAR-100 and a LLaMA-based network on Wikipedia-1B. In both cases, the proposed framework achieves training accuracy comparable to the ring all-reduce baseline while eliminating communication overhead.
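The unitary-plus-diagonal weight structure mentioned above follows the standard ONN decomposition: any real weight matrix factors via SVD as W = U Σ Vᵀ, where U and Vᵀ are unitary (realizable as MZI meshes) and Σ is diagonal (realizable as per-channel attenuation/gain). The NumPy sketch below only demonstrates that decomposition; the paper's exact approximation and training details are not given in this abstract.

```python
import numpy as np

# SVD factors a weight matrix into unitary x diagonal x unitary:
#   W = U @ diag(s) @ Vt
# U and Vt map to MZI meshes; diag(s) maps to per-channel optical
# attenuators/amplifiers. Matrix size is an arbitrary example.
rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8))

U, s, Vt = np.linalg.svd(W)        # U, Vt orthogonal; s >= 0, sorted
W_rebuilt = U @ np.diag(s) @ Vt    # exact reconstruction of W

# Orthogonality check: U @ U.T should be the identity.
unitarity_err = np.linalg.norm(U @ U.T - np.eye(8))
```

Because U and Vᵀ are exactly orthogonal, hardware cost concentrates in the two MZI meshes and one diagonal stage, which is the structure the abstract's cost-reduction argument relies on.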