🤖 AI Summary
To address the bandwidth bottleneck of cross-GPU communication in distributed training and inference of large language models (LLMs), this paper proposes the first efficient communication paradigm supporting arbitrary bit-width quantization, down to 2 bits. Methodologically, it integrates bit splitting (decomposing non-native bit widths into hardware-supported units) with spike reserving (explicitly retaining extreme values to suppress quantization error) under a holistic hardware-software co-design compatible with both NVLink-based and PCIe-based interconnects. The approach optimizes both the AllReduce and All2All collective communication primitives. Experiments demonstrate up to a 3.2× speedup for AllReduce and 2× for All2All while keeping accuracy degradation acceptable, significantly extending the flexibility, hardware resource utilization, and practical limits of ultra-low-bit quantization in distributed LLM systems.
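The bit-splitting idea above can be illustrated with a small sketch. This is not the paper's implementation; it is a hypothetical NumPy example, assuming the common scheme in which each irregular-width code (here 6-bit) is decomposed into hardware-native planes (a 4-bit plane and a 2-bit plane), each of which packs densely into bytes for transmission:

```python
import numpy as np

def split_6bit(vals):
    """Hypothetical sketch: decompose 6-bit codes into a 4-bit plane and a
    2-bit plane, each packed densely into bytes (hardware-native units).
    Assumes len(vals) is a multiple of 4 for simplicity."""
    vals = np.asarray(vals, dtype=np.uint8)
    hi = vals >> 2        # top 4 bits of each code
    lo = vals & 0b11      # bottom 2 bits of each code
    # Two 4-bit values per byte, four 2-bit values per byte.
    hi_packed = (hi[0::2] << 4) | hi[1::2]
    lo_packed = (lo[0::4] << 6) | (lo[1::4] << 4) | (lo[2::4] << 2) | lo[3::4]
    return hi_packed, lo_packed

def merge_6bit(hi_packed, lo_packed, n):
    """Reassemble the original 6-bit codes from the two packed planes."""
    hi = np.empty(n, dtype=np.uint8)
    hi[0::2] = hi_packed >> 4
    hi[1::2] = hi_packed & 0xF
    lo = np.empty(n, dtype=np.uint8)
    lo[0::4] = lo_packed >> 6
    lo[1::4] = (lo_packed >> 4) & 0b11
    lo[2::4] = (lo_packed >> 2) & 0b11
    lo[3::4] = lo_packed & 0b11
    return (hi << 2) | lo

# Eight 6-bit codes occupy exactly 48 bits = 6 bytes after splitting.
vals = np.array([0, 5, 17, 63, 42, 7, 33, 12], dtype=np.uint8)
hp, lp = split_6bit(vals)
restored = merge_6bit(hp, lp, len(vals))
```

Because each plane is a native 4- or 2-bit stream, it can reuse existing quantized-transfer kernels, which is the compatibility point the summary makes.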
📝 Abstract
Communication bottlenecks have emerged as a critical challenge in the distributed training and deployment of large language models (LLMs). This paper introduces FlashCommunication V2, a novel communication paradigm enabling efficient cross-GPU transmission at arbitrary bit widths. Its core innovations lie in the proposed bit splitting and spike reserving techniques, which address the challenges of low-bit quantization. Bit splitting decomposes irregular bit widths into basic units, ensuring compatibility with hardware capabilities and thus enabling transmission at any bit width. Spike reserving, on the other hand, retains numerical outliers (i.e., minima and maxima) as floating-point numbers, which shrinks the dynamic numerical range and pushes the quantization limit to 2 bits with acceptable losses. FlashCommunication V2 significantly enhances the flexibility and resource utilization of communication systems. Through meticulous software-hardware co-design, it delivers robust performance and reduced overhead across both NVLink-based and PCIe-based architectures, achieving a maximum 3.2× speedup in AllReduce and 2× in All2All communication.
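The spike-reserving idea can be sketched in a few lines. This is a hedged illustration, not the paper's kernel: it assumes a simple uniform quantizer in which the global minimum and maximum are kept in floating point, so the remaining values are quantized over a narrower range and 2-bit codes lose less precision:

```python
import numpy as np

def spike_reserving_quantize(x, bits=2):
    """Hypothetical sketch of spike reserving: keep the extreme values
    (spikes) in floating point and quantize the rest of the tensor
    uniformly over the resulting narrower dynamic range."""
    x = np.asarray(x, dtype=np.float32)
    lo_idx, hi_idx = int(np.argmin(x)), int(np.argmax(x))
    spikes = {lo_idx: float(x[lo_idx]), hi_idx: float(x[hi_idx])}
    mask = np.ones(x.size, dtype=bool)
    mask[[lo_idx, hi_idx]] = False          # exclude spikes from quantization
    inner = x[mask]
    lo, hi = float(inner.min()), float(inner.max())  # shrunken range
    levels = (1 << bits) - 1                # 3 levels for 2-bit codes
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((inner - lo) / scale).astype(np.uint8)
    return codes, scale, lo, spikes, mask

def spike_reserving_dequantize(codes, scale, lo, spikes, mask):
    """Reconstruct: dequantize the inner values, restore spikes exactly."""
    out = np.empty(mask.size, dtype=np.float32)
    out[mask] = codes.astype(np.float32) * scale + lo
    for idx, val in spikes.items():
        out[idx] = val
    return out

# A tensor with two outliers: without spike reserving, the 2-bit grid
# would have to span [-8, 9]; with it, only [0.1, 0.5].
x = np.array([-8.0, 0.1, 0.2, 0.3, 0.5, 9.0], dtype=np.float32)
q, scale, lo, spikes, mask = spike_reserving_quantize(x)
xr = spike_reserving_dequantize(q, scale, lo, spikes, mask)
```

The roundtrip error of the non-spike values is bounded by half a quantization step of the shrunken range, which is the mechanism by which the abstract's "pushes the quantization limit to 2 bits" claim becomes plausible.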