Hummingbird: A Smaller and Faster Large Language Model Accelerator on Embedded FPGA

📅 2025-07-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the mismatch between resource-constrained embedded FPGAs and the high computational/memory demands of large language models (LLMs), this work proposes a customized FPGA acceleration architecture for edge deployment. Methodologically, it integrates model partitioning, dataflow optimization, bandwidth-efficient memory access, and dynamic model offloading to enable hardware-algorithm co-design. Evaluated on KV260 and ZCU104 platforms, the architecture reduces LUTs, DSPs, and power consumption by 67%, 39%, and 42%, respectively; overcomes the 4 GB memory barrier to support LLaMA3-8B and long-context inference; achieves throughputs of 4.8–8.6 tokens/s with 93–94% memory bandwidth utilization; and, for the first time, enables end-to-end deployment on cost-effective Spartan UltraScale FPGAs. This work establishes a scalable hardware-software co-design paradigm for efficient LLM deployment on severely resource-constrained edge devices.
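Single-token LLM decoding is memory-bound: each generated token requires streaming the full weight set from DRAM, so the bandwidth-utilization figures above translate directly into throughput. A minimal roofline-style sketch of that relationship, with illustrative (hypothetical) bandwidth and model-size numbers rather than the paper's measured platform values:

```python
def decode_tokens_per_s(bandwidth_gbs: float, utilization: float,
                        model_bytes_gb: float) -> float:
    """Roofline-style estimate for memory-bound LLM decoding:
    throughput is bounded by (effective DRAM bandwidth) / (model size),
    since every token streams the whole weight set once."""
    return bandwidth_gbs * utilization / model_bytes_gb

# Illustrative numbers (assumptions, not from the paper): an 8B-parameter
# model quantized to 4 bits occupies roughly 4 GB of weights; 19.2 GB/s
# is a plausible embedded DDR4 bandwidth; 93% utilization matches the
# range reported above.
est = decode_tokens_per_s(bandwidth_gbs=19.2, utilization=0.93,
                          model_bytes_gb=4.0)
print(f"~{est:.1f} tokens/s")  # landing in the same few-tokens/s regime
```

This back-of-envelope model explains why raising bandwidth utilization from 84% to 93–94% is the headline lever for decode throughput on these platforms.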

📝 Abstract
Deploying large language models (LLMs) on embedded devices remains a significant research challenge due to the high computational and memory demands of LLMs and the limited hardware resources available in such environments. While embedded FPGAs have demonstrated performance and energy efficiency in traditional deep neural networks, their potential for LLM inference remains largely unexplored. Recent efforts to deploy LLMs on FPGAs have primarily relied on large, expensive cloud-grade hardware and have only shown promising results on relatively small LLMs, limiting their real-world applicability. In this work, we present Hummingbird, a novel FPGA accelerator designed specifically for LLM inference on embedded FPGAs. Hummingbird is smaller, targeting embedded FPGAs such as the KV260 and ZCU104 with 67% LUT, 39% DSP, and 42% power savings over existing research. Hummingbird is stronger, targeting LLaMA3-8B and supporting longer contexts, overcoming the typical 4 GB memory constraint of embedded FPGAs through offloading strategies. Finally, Hummingbird is faster, achieving 4.8 tokens/s and 8.6 tokens/s for LLaMA3-8B on the KV260 and ZCU104 respectively, with 93–94% model bandwidth utilization, outperforming the prior baseline of 4.9 tokens/s for LLaMA2-7B at 84% bandwidth utilization. We further demonstrate the viability of industrial applications by deploying Hummingbird on a cost-optimized Spartan UltraScale FPGA, paving the way for affordable LLM solutions at the edge.
Problem

Research questions and friction points this paper is trying to address.

Deploying LLMs on resource-limited embedded devices
Optimizing FPGA for efficient LLM inference
Overcoming memory constraints in embedded FPGA systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

FPGA accelerator for embedded LLM inference
Offloading strategies to overcome memory constraints
High-speed token generation with optimized bandwidth
Jindong Li
Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences; Center for Long-term Artificial Intelligence; Beijing Institute of AI Safety and Governance; School of Artificial Intelligence, University of Chinese Academy of Sciences
Tenglong Li
Institute of Automation, Chinese Academy of Sciences
Hardware Architecture
Ruiqi Chen
Vrije Universiteit Brussel
FPGAs · Domain-specific Accelerator
Guobin Shen
Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences; Center for Long-term Artificial Intelligence; School of Future Technology, University of Chinese Academy of Sciences
Dongcheng Zhao
Beijing Institute of AI Safety and Governance
Spiking Neural Networks · Event Based Vision · Brain-inspired AI · LLM Safety
Qian Zhang
Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences; Center for Long-term Artificial Intelligence; Beijing Institute of AI Safety and Governance; School of Artificial Intelligence, University of Chinese Academy of Sciences
Yi Zeng
Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences; Center for Long-term Artificial Intelligence; School of Future Technology, University of Chinese Academy of Sciences; Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Chinese Academy of Sciences