🤖 AI Summary
Deploying large language models (LLMs) on edge FPGAs faces challenges including high computational and memory overhead, limited on-chip resources, stringent power constraints, and long prefill latency. To address these, this work proposes an end-to-end ternary (1.58-bit) LLM inference acceleration framework. Our method introduces a lookup-table-based ternary matrix multiplication engine, integrated with fine-grained URAM-based weight caching, a streaming dataflow architecture, attention-reordered prefill scheduling, and a dedicated decoding unit. We further employ grouped activation online precomputation, 8-bit activation quantization, and fusion of floating-point and linear operators. Under a 5 W power budget, our implementation achieves up to 25 tokens/s decoding throughput and first-token latency of 0.45–0.96 seconds—demonstrating substantial improvements in energy efficiency and real-time performance for edge-deployed LLMs.
📝 Abstract
With the emergence of wearable devices and other embedded systems, deploying large language models (LLMs) on edge platforms has become an urgent need. However, this is challenging because of their high computational and memory demands. Although recent low-bit quantization methods (e.g., BitNet, DeepSeek) compress weights to as low as 1.58 bits with minimal accuracy loss, edge deployment is still constrained by limited on-chip resources, power budgets, and the often-neglected long latency of the prefill stage. We present **TeLLMe**, the first table-lookup-based ternary LLM accelerator for low-power edge FPGAs that fully supports both prefill and autoregressive decoding using 1.58-bit weights and 8-bit activations. TeLLMe incorporates several novel techniques, including (1) a table-lookup-based ternary matrix multiplication (TLMM) engine utilizing grouped activations and online precomputation for low resource utilization and high throughput; (2) a fine-grained analytic URAM-based weight buffer management scheme for efficient loading and compute-engine access; (3) a streaming dataflow architecture that fuses floating-point element-wise operations with linear computations to hide latency; (4) a reversed-reordered prefill-stage attention scheme with fused attention operations for high memory efficiency; and (5) a resource-efficient specialized decoding-stage attention unit. Under a 5 W power budget, TeLLMe delivers up to 25 tokens/s decoding throughput and 0.45–0.96 s time-to-first-token (TTFT) for 64–128-token prompts, marking a significant advance in energy-efficient LLM inference on edge FPGAs.
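To make the core idea of table-lookup ternary matrix multiplication concrete, here is a minimal host-side sketch in NumPy. It is an illustration of the general LUT-matmul technique, not TeLLMe's exact hardware design: the group size `G`, the base-3 weight packing, and all function names are assumptions for this example. Weights are ternary {-1, 0, +1} and activations are 8-bit; for each group of `G` activations, all `3**G` possible ternary partial sums are precomputed online, so the matmul reduces to table lookups indexed by packed weight codes instead of multiply-accumulates.

```python
import itertools
import numpy as np

G = 3  # activations per lookup group (illustrative choice)

def precompute_tables(x, G=G):
    """For each activation group, tabulate the dot product with every
    one of the 3**G ternary weight patterns (the 'online precomputation')."""
    patterns = np.array(list(itertools.product([-1, 0, 1], repeat=G)))  # (3**G, G)
    x = x.reshape(-1, G)                     # (num_groups, G)
    return x @ patterns.T                    # (num_groups, 3**G)

def pack_ternary(w_row, G=G):
    """Encode each ternary weight group as a base-3 index into the table."""
    digits = w_row.reshape(-1, G) + 1        # map {-1, 0, +1} -> {0, 1, 2}
    place = 3 ** np.arange(G)[::-1]          # most-significant digit first
    return digits @ place                    # (num_groups,)

def tlmm(W, x):
    """y = W @ x computed with table lookups instead of multiplies."""
    tables = precompute_tables(x)            # shared across all output rows
    y = np.empty(W.shape[0], dtype=np.int64)
    for i, row in enumerate(W):
        idx = pack_ternary(row)              # one code per weight group
        y[i] = tables[np.arange(len(idx)), idx].sum()
    return y

# Sanity check against a direct matmul on random ternary weights
# and int8-range activations.
rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 12))        # ternary weight matrix
x = rng.integers(-128, 128, size=12)         # int8 activations
assert np.array_equal(tlmm(W, x), W @ x)
```

The payoff is that the table build costs `O(n * 3**G / G)` additions once per activation vector, after which every output row needs only `n/G` lookups and additions; on an FPGA the tables map naturally onto LUT/BRAM resources, which is what lets a ternary engine avoid DSP multipliers.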