PacQ: A SIMT Microarchitecture for Efficient Dataflow in Hyper-asymmetric GEMMs

📅 2025-02-25
🤖 AI Summary
To address the "hyper-asymmetric GEMM" performance bottleneck arising from mixed-precision computation—matrix multiplication between low-bit INT weights and high-precision FP activations—when deploying large language models (LLMs) on SIMT architectures, this paper proposes an end-to-end microarchitectural optimization. The method introduces three key innovations: (1) tile-level INT-weight packing co-designed with dataflow orchestration; (2) a custom FP-INT multiply-accumulate unit that unpacks and computes on multiple INT weights in parallel; and (3) weight repacking and tiling strategies that eliminate dequantization overhead. Evaluated against a conventional SIMT baseline, the design achieves up to 1.99× peak speedup and reduces the energy-delay product (EDP) by 81.4%, significantly improving LLM inference efficiency.

📝 Abstract
Weight-only quantization has been widely explored in large language models (LLMs) to reduce memory storage and data loading overhead. During deployment on single-instruction-multiple-threads (SIMT) architectures, weights are stored in low-precision integer (INT) format, while activations remain in full-precision floating-point (FP) format to preserve inference accuracy. Although memory footprint and data loading requirements for weight matrices are reduced, computation performance gains remain limited due to the need to convert weights back to FP format through unpacking and dequantization before GEMM operations. In this work, we investigate methods to accelerate GEMM operations involving packed low-precision INT weights and high-precision FP activations, defining this as the hyper-asymmetric GEMM problem. Our approach co-optimizes tile-level packing and dataflow strategies for INT weight matrices. We further design a specialized FP-INT multiplier unit tailored to our packing and dataflow strategies, enabling parallel processing of multiple INT weights. Finally, we integrate the packing, dataflow, and multiplier unit into PacQ, a SIMT microarchitecture designed to efficiently accelerate hyper-asymmetric GEMMs. We show that PacQ can achieve up to 1.99x speedup and 81.4% reduction in EDP compared to weight-only quantized LLM workloads running on conventional SIMT baselines.
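The unpack-and-dequantize path the abstract identifies as the bottleneck can be illustrated with a small software model. This is a minimal sketch, not the paper's hardware design: the INT4 weight width, FP32 activations, per-tile scale, and all function names here are illustrative assumptions.

```python
import numpy as np

def pack_int4(weights):
    """Pack eight signed 4-bit weights (range [-8, 7]) into one 32-bit word."""
    word = 0
    for i, w in enumerate(weights):
        word |= (int(w) & 0xF) << (4 * i)
    return word

def unpack_dequant(word, scale):
    """Unpack eight INT4 values from a 32-bit word and dequantize to FP32.

    This models the conversion a conventional SIMT kernel must perform
    before the FP GEMM can proceed; the overhead of this step is what
    PacQ's packing/dataflow co-design and FP-INT unit aim to remove.
    """
    vals = []
    for i in range(8):
        nib = (word >> (4 * i)) & 0xF
        if nib >= 8:          # sign-extend the 4-bit value
            nib -= 16
        vals.append(nib * scale)
    return np.array(vals, dtype=np.float32)

# A tile row of INT4 weights with a per-tile dequantization scale
w = [-8, -3, 0, 1, 2, 5, 7, -1]
scale = 0.05
word = pack_int4(w)
recovered = unpack_dequant(word, scale)

# Conventional path: dequantize first, then multiply with FP activations
acts = np.linspace(0.1, 0.8, 8, dtype=np.float32)
out = float(recovered @ acts)
```

The round trip recovers `w * scale` exactly, since dequantization here is a pure scaling of exactly representable integers; the cost is the per-element unpacking and conversion work that precedes every multiply.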
Problem

Research questions and friction points this paper is trying to address.

Accelerating GEMMs that mix packed low-precision INT weights with high-precision FP activations (hyper-asymmetric GEMMs)
Co-optimizing tile-level packing and dataflow strategies for INT weight matrices
Designing a specialized FP-INT multiplier unit suited to SIMT architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tile-level INT-weight packing co-designed with dataflow orchestration
Custom FP-INT multiply-accumulate unit that processes multiple packed INT weights in parallel
PacQ, a SIMT microarchitecture integrating packing, dataflow, and the multiplier unit to accelerate hyper-asymmetric GEMMs