🤖 AI Summary
To address the “hyper-asymmetric GEMM” performance bottleneck arising from mixed-precision computation (matrix multiplication between low-bit INT weights and high-precision FP activations) when deploying large language models (LLMs) on SIMT architectures, this paper proposes an end-to-end microarchitectural optimization. The method introduces three key innovations: (1) tile-level INT-weight packing co-designed with the dataflow; (2) a custom FP-INT multiply-accumulate unit that unpacks and computes on multiple INT weights in parallel; and (3) PacQ, a SIMT microarchitecture that integrates the packing, dataflow, and multiplier designs to eliminate unpacking and dequantization overhead. Evaluated against a conventional SIMT baseline, PacQ achieves up to a 1.99× speedup and reduces the energy-delay product (EDP) by 81.4%, significantly improving LLM inference efficiency.
📝 Abstract
Weight-only quantization has been widely explored in large language models (LLMs) to reduce memory storage and data loading overhead. During deployment on single-instruction-multiple-threads (SIMT) architectures, weights are stored in low-precision integer (INT) format, while activations remain in full-precision floating-point (FP) format to preserve inference accuracy. Although the memory footprint and data loading requirements for weight matrices are reduced, computation performance gains remain limited because weights must be converted back to FP format through unpacking and dequantization before GEMM operations. In this work, we investigate methods to accelerate GEMM operations involving packed low-precision INT weights and high-precision FP activations, defining this as the hyper-asymmetric GEMM problem. Our approach co-optimizes tile-level packing and dataflow strategies for INT weight matrices. We further design a specialized FP-INT multiplier unit tailored to our packing and dataflow strategies, enabling parallel processing of multiple INT weights. Finally, we integrate the packing, dataflow, and multiplier unit into PacQ, a SIMT microarchitecture designed to efficiently accelerate hyper-asymmetric GEMMs. We show that PacQ can achieve up to a 1.99x speedup and an 81.4% reduction in EDP compared to weight-only quantized LLM workloads running on conventional SIMT baselines.
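To make the baseline overhead concrete, here is a minimal NumPy sketch of the conventional weight-only-quantized flow the abstract describes: INT4 weights are packed two per byte in memory, then must be unpacked and dequantized back to FP before the GEMM itself. All function names, the nibble layout, and the per-tensor scale are illustrative assumptions, not PacQ's actual packing scheme or API; PacQ's contribution is precisely to avoid this unpack-and-dequantize step in hardware.

```python
import numpy as np

def pack_int4(w_int):
    """Pack signed INT4 weights (values in [-8, 7]) two per uint8 byte.

    Illustrative layout assumption: even-indexed weight in the low nibble,
    odd-indexed weight in the high nibble.
    """
    nibbles = (w_int.astype(np.int8) & 0x0F).astype(np.uint8)  # two's-complement nibbles
    return nibbles[..., 0::2] | (nibbles[..., 1::2] << 4)

def unpack_dequant(packed, scale):
    """Unpack nibbles and dequantize to FP32 -- the per-GEMM overhead
    that limits speedup on a conventional SIMT baseline."""
    lo = (packed & 0x0F).astype(np.int8)
    hi = (packed >> 4).astype(np.int8)
    lo = np.where(lo > 7, lo - 16, lo)  # sign-extend 4-bit two's complement
    hi = np.where(hi > 7, hi - 16, hi)
    w = np.empty(packed.shape[:-1] + (packed.shape[-1] * 2,), dtype=np.float32)
    w[..., 0::2] = lo
    w[..., 1::2] = hi
    return w * scale  # per-tensor scale, for simplicity

rng = np.random.default_rng(0)
W = rng.integers(-8, 8, size=(64, 64))              # INT4 weight matrix
x = rng.standard_normal(64).astype(np.float32)      # FP activations
scale = np.float32(0.05)

packed = pack_int4(W)                               # 2x smaller weight storage
y = unpack_dequant(packed, scale) @ x               # unpack + dequantize + FP GEMM
y_ref = (W.astype(np.float32) * scale) @ x
assert np.allclose(y, y_ref)
```

The packed tensor halves weight-memory traffic, but every GEMM call still pays for `unpack_dequant` on the SIMT cores, which is the cost PacQ's fused FP-INT multiply-accumulate path is designed to remove.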