🤖 AI Summary
To address memory constraints and high multi-token inference latency in deploying large language and vision models on edge and mid-tier GPUs (e.g., Jetson Orin Nano, A40), this work proposes a hardware-software co-optimization framework. First, it pioneers the application of the Roofline model to diagnose computational bottlenecks in Block Low-Rank (BLR)-compressed model inference. Second, it designs partially fused Triton kernels and custom memory layouts to overcome PyTorch’s compilation limitations under memory-bandwidth-bound conditions. Evaluated on Llama-7B/1B, GPT2-S, DiT-XL/2, and ViT-B, the approach achieves up to 3.76× inference speedup and 3× model compression over PyTorch dense baselines—significantly reducing both end-to-end latency and GPU memory footprint on resource-constrained devices.
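As a rough sketch of the roofline reasoning (the hardware numbers below are illustrative assumptions, not figures from the paper): a kernel with arithmetic intensity

$$ I = \frac{\text{FLOPs performed}}{\text{bytes moved}} $$

is memory-bound whenever $I$ falls below the machine balance $I_{\text{ridge}} = \text{peak FLOP/s} / \text{peak bytes/s}$. For example, a dense fp16 matrix-vector product with a $d \times d$ weight performs about $2d^2$ FLOPs while moving about $2d^2$ bytes of weights, giving $I \approx 1$ FLOP/byte, far below the ridge points of tens to hundreds of FLOPs/byte typical of modern GPUs. Applying the same accounting to the factorized BLR matmuls is how the work diagnoses the memory-bound regime of multi-token inference.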
📝 Abstract
Recent advances in transformer-based foundation models have made them the default choice for many tasks, but their rapidly growing size makes fitting a full model on a single GPU increasingly difficult and their computational cost prohibitive. Block low-rank (BLR) compression techniques address this challenge by learning compact representations of weight matrices. While traditional low-rank (LR) methods often incur sharp accuracy drops, BLR approaches such as Monarch and BLAST can better capture the underlying structure, preserving accuracy while reducing computation and memory footprint. In this work, we use roofline analysis to show that, although BLR methods achieve theoretical savings and practical speedups for single-token inference, multi-token inference often becomes memory-bound in practice, increasing latency despite compiler-level optimizations in PyTorch. To address this, we introduce custom Triton kernels with partial fusion and memory layout optimizations for both Monarch and BLAST. On memory-constrained NVIDIA GPUs such as Jetson Orin Nano and A40, our kernels deliver up to $3.76\times$ speedups and $3\times$ model size compression over PyTorch dense baselines that use the CUDA backend and compiler-level optimizations, while supporting various models including Llama-7B/1B, GPT2-S, DiT-XL/2, and ViT-B. Our code is available at https://github.com/pabillam/mem-efficient-blr.
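For intuition, a minimal PyTorch sketch of a Monarch-style block low-rank matmul is shown below. The shapes, names, and exact factorization are illustrative assumptions, not the paper's parameterization, and the paper's actual kernels are written in Triton with partial fusion and custom memory layouts rather than composed from PyTorch ops.

```python
import torch

def monarch_matmul(x, A, B):
    """Monarch-style block low-rank product (illustrative sketch).

    x: (batch, d) input, with d = m * b.
    A: (m, b, b)  first block-diagonal factor (m blocks of size b x b).
    B: (b, m, m)  second block-diagonal factor, applied after a transpose
                  permutation so information mixes across the first blocks.
    Returns: (batch, d).
    """
    batch, d = x.shape
    m, b, _ = A.shape
    assert d == m * b

    xb = x.view(batch, m, b)
    h = torch.einsum('nmb,mbc->nmc', xb, A)   # per-block multiply with A
    h = h.transpose(1, 2)                     # permutation: (batch, b, m)
    y = torch.einsum('nbm,bmk->nbk', h, B)    # per-block multiply with B
    return y.transpose(1, 2).reshape(batch, d)

# Usage: a hypothetical d = 4096 layer with m = b = 64 stores
# d*(m + b) = ~0.5M parameters instead of d^2 = ~16.8M for a dense weight.
x = torch.randn(8, 4096)
A = torch.randn(64, 64, 64)
B = torch.randn(64, 64, 64)
y = monarch_matmul(x, A, B)   # (8, 4096)
```

A composition like this launches separate kernels and round-trips the intermediate activation through GPU memory between the two factor multiplies, which is the kind of overhead the partially fused Triton kernels and memory layout optimizations are meant to reduce.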