🤖 AI Summary
To address the mismatch between resource-constrained embedded FPGAs and the high computational/memory demands of large language models (LLMs), this work proposes a customized FPGA acceleration architecture for edge deployment. Methodologically, it integrates model partitioning, dataflow optimization, bandwidth-efficient memory access, and dynamic model offloading to enable hardware-algorithm co-design. Evaluated on KV260 and ZCU104 platforms, the architecture reduces LUTs, DSPs, and power consumption by 67%, 39%, and 42%, respectively; overcomes the 4 GB memory barrier to support LLaMA3-8B and long-context inference; achieves throughputs of 4.8–8.6 tokens/s with 93–94% memory bandwidth utilization; and, for the first time, enables end-to-end deployment on cost-effective Spartan UltraScale FPGAs. This work establishes a scalable hardware-software co-design paradigm for efficient LLM deployment on severely resource-constrained edge devices.
📝 Abstract
Deploying large language models (LLMs) on embedded devices remains a significant research challenge due to the high computational and memory demands of LLMs and the limited hardware resources available in such environments. While embedded FPGAs have demonstrated performance and energy efficiency in traditional deep neural networks, their potential for LLM inference remains largely unexplored. Recent efforts to deploy LLMs on FPGAs have primarily relied on large, expensive cloud-grade hardware and have only shown promising results on relatively small LLMs, limiting their real-world applicability. In this work, we present Hummingbird, a novel FPGA accelerator designed specifically for LLM inference on embedded FPGAs. Hummingbird is smaller, targeting embedded FPGAs such as the KV260 and ZCU104 with 67% LUT, 39% DSP, and 42% power savings over prior work. Hummingbird is stronger, targeting LLaMA3-8B and supporting longer contexts, overcoming the typical 4 GB memory constraint of embedded FPGAs through offloading strategies. Finally, Hummingbird is faster, achieving 4.8 tokens/s and 8.6 tokens/s for LLaMA3-8B on the KV260 and ZCU104, respectively, with 93–94% memory bandwidth utilization, outperforming the prior baseline of 4.9 tokens/s for LLaMA2-7B at 84% bandwidth utilization. We further demonstrate the viability of industrial applications by deploying Hummingbird on a cost-optimized Spartan UltraScale FPGA, paving the way for affordable LLM solutions at the edge.