F-BFQ: Flexible Block Floating-Point Quantization Accelerator for LLMs

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low inference efficiency and poor hardware adaptability of mixed block floating-point (BFP) quantized large language models (LLMs) on edge devices, this paper proposes a domain-specific accelerator architecture that supports dynamic BFP mode switching. According to the authors, the architecture is the first to enable runtime adaptation between two distinct BFP quantization formats without reconfiguration. It integrates a customized matrix multiplication unit, is co-optimized with the llama.cpp inference framework, and targets deployment on the AMD Kria KV260 platform. Evaluation on three BFP-quantized LLMs demonstrates a 1.4× average speedup over an Arm NEON CPU baseline, reaching an inference throughput of 5.2 tokens/s (~3.9 words/s). The design improves edge LLM inference efficiency while broadening compatibility across diverse BFP quantization strategies.

📝 Abstract
Large Language Models (LLMs) have become increasingly prominent for daily tasks, from improving sound-to-text translation to generating additional frames for the latest video games. With the help of LLM inference frameworks, such as llama.cpp, which support optimizations such as KV-caching and quantization, it is now easier than ever to deploy LLMs on edge devices. Quantization is fundamental to enable LLMs on resource-constrained edge devices, and llama.cpp utilizes block floating point (BFP) quantization to drastically reduce the bit width of weights and input tensors, the memory footprint, and the computational power required to run LLMs. LLMs are typically quantized with mixed BFP quantization across the model layers to reduce the loss of model accuracy due to quantization. Therefore, to efficiently accelerate across the layers of BFP-quantized LLMs, specialized accelerators need to support different BFP variants without reconfiguration. To address this issue, we propose a Flexible Block Floating-Point Quantization (F-BFQ) accelerator, which can dynamically switch between two BFP quantization variants and perform matrix multiplication (MatMul) operations. Our initial F-BFQ accelerator design, deployed on the AMD Kria board, reduces inference time by 1.4x on average over the Arm NEON-based CPU execution across three BFP quantized LLMs while achieving 5.2 tokens per second (~3.9 words per second).
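To make the abstract's notion of block quantization concrete, here is a minimal Python sketch of blockwise quantization with a shared per-block scale, similar in spirit to llama.cpp's Q8_0 format (one scale per block of 32 weights, 8-bit integer quants). This is an illustrative model of the general technique, not code from the paper or from llama.cpp.

```python
import numpy as np

BLOCK_SIZE = 32  # llama.cpp groups weights into blocks of 32

def quantize_block_q8(block):
    """Quantize one block to int8 with a single shared scale
    (the scheme used by llama.cpp-style Q8_0 quantization)."""
    scale = float(np.max(np.abs(block))) / 127.0
    if scale == 0.0:
        return 0.0, np.zeros_like(block, dtype=np.int8)
    quants = np.clip(np.round(block / scale), -127, 127).astype(np.int8)
    return scale, quants

def dequantize_block_q8(scale, quants):
    """Recover approximate floats from the shared scale and int8 quants."""
    return scale * quants.astype(np.float32)

# Round-trip a random block; the per-element error is bounded by scale/2.
rng = np.random.default_rng(0)
block = rng.standard_normal(BLOCK_SIZE).astype(np.float32)
scale, quants = quantize_block_q8(block)
recovered = dequantize_block_q8(scale, quants)
max_err = float(np.max(np.abs(block - recovered)))
```

Storing one scale per 32 weights plus 8-bit quants (rather than 32-bit floats) is what yields the memory-footprint reduction the abstract describes; lower-bit variants (e.g. 4-bit quants) trade more accuracy for further compression.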
Problem

Research questions and friction points this paper is trying to address.

Accelerating mixed BFP quantization across LLM layers
Supporting multiple BFP variants without hardware reconfiguration
Reducing inference time for quantized LLMs on edge devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flexible accelerator supporting multiple BFP variants
Dynamic switching between quantization formats without reconfiguration
Hardware design reducing LLM inference time on edge devices
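The switching idea in the bullets above can be modeled in software: each quantized block carries a format tag, and the MatMul path selects the matching decoder at runtime with no reconfiguration step. This is a hypothetical sketch of the concept only; the decoder names, the `matmul_row` helper, and the two formats (an 8-bit and an offset-4-bit variant, loosely modeled on llama.cpp's Q8_0 and Q4_0) are illustrative assumptions, not the paper's hardware design.

```python
import numpy as np

def dequant_q8(scale, quants):
    """8-bit variant: signed quants scaled by a shared per-block scale."""
    return scale * quants.astype(np.float32)

def dequant_q4(scale, quants):
    """4-bit variant: unsigned quants stored with an offset of 8."""
    return scale * (quants.astype(np.float32) - 8.0)

# Runtime dispatch table: the format tag on each block picks the decoder.
DECODERS = {"q8": dequant_q8, "q4": dequant_q4}

def matmul_row(blocks, x):
    """Dot product of one mixed-format quantized row with activations x.
    Each block is a (fmt, scale, quants) tuple; fmt is resolved per block,
    so formats can change mid-row without any reconfiguration."""
    out, i = 0.0, 0
    for fmt, scale, quants in blocks:
        n = len(quants)
        out += float(np.dot(DECODERS[fmt](scale, quants), x[i:i + n]))
        i += n
    return out
```

For example, a row whose first block is tagged `"q8"` and whose second is tagged `"q4"` is consumed by the same `matmul_row` call; in the accelerator, the analogous mode switch happens in the MatMul unit rather than in a Python dictionary lookup.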