🤖 AI Summary
Deploying large language model (LLM) inference on edge devices is challenging due to high computational overhead and memory pressure. To address this, we propose VEDA, a hardware–software co-design framework. VEDA introduces a voting-based KV cache eviction algorithm enabling O(1)-complexity dynamic cache management; a reconfigurable processing element (PE) array with a flexible multiplication dataflow to efficiently support variable-length sequences and multidimensional workloads; and element-serial scheduling to optimize nonlinear operations such as softmax and LayerNorm. Experimental results demonstrate that VEDA substantially reduces inference latency and hardware resource consumption. On edge platforms, it achieves superior energy efficiency compared to state-of-the-art approaches, improving real-time responsiveness and strengthening on-device privacy for localized LLM inference.
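The summary does not spell out the paper's exact voting rule, so the following is only a minimal Python sketch of the general idea: each decode step's attention weights cast "votes" for the KV vectors the query attends to most, and the least-voted entry is evicted once the cache budget is exceeded. The function name, the `top_k` voting rule, and the per-step argmin are illustrative assumptions, not VEDA's implementation; in software the argmin is O(L), whereas the accelerator reportedly achieves O(1) per-step management in hardware.

```python
import numpy as np

def vote_and_evict(votes, attn_weights, top_k, cache_budget):
    """One decode step of a voting-style KV eviction policy (illustrative sketch).

    votes        : running vote count per cached KV vector, shape (L,)
    attn_weights : this step's attention distribution over the cache, shape (L,)
    top_k        : number of positions the current query votes for
    cache_budget : maximum number of KV vectors to retain
    Returns the updated vote counts and the evicted index (or None).
    """
    # The current query "votes" for the positions it attends to most strongly.
    voters = np.argpartition(attn_weights, -top_k)[-top_k:]
    votes[voters] += 1

    evict_idx = None
    if len(votes) > cache_budget:
        # Evict the least-voted (least important) KV vector. This argmin is
        # O(L) in software; the paper claims O(1) per step via dedicated hardware.
        evict_idx = int(np.argmin(votes))
        votes = np.delete(votes, evict_idx)
    return votes, evict_idx
```

A caller would append a fresh vote counter for each newly cached token, compute the step's attention weights, and then invoke this routine to keep the cache within budget.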
📝 Abstract
Large Language Models (LLMs) excel at natural language processing tasks but pose significant computational and memory challenges for edge deployment due to their intensive resource demands. This work addresses the efficiency of LLM inference through algorithm–hardware–dataflow tri-optimization. We propose a novel voting-based KV cache eviction algorithm that balances hardware efficiency and algorithmic accuracy by adaptively identifying unimportant KV vectors. From a dataflow perspective, we introduce a flexible-product dataflow and a runtime-reconfigurable PE array for matrix-vector multiplication. The proposed approach effectively handles diverse dimensional requirements and addresses the challenge of incrementally growing sequence lengths. Additionally, an element-serial scheduling scheme is proposed for nonlinear operations such as softmax and layer normalization (LayerNorm). Results demonstrate a substantial reduction in latency, accompanied by a significant decrease in hardware complexity, from O(N) to O(1). The proposed solution is realized in a custom-designed accelerator, VEDA, which outperforms existing hardware platforms. This research represents a significant advancement in LLM inference on resource-constrained edge devices, facilitating real-time processing, enhancing data privacy, and enabling model customization.
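To illustrate how element-serial scheduling of a nonlinear operation can shrink buffering from O(N) to O(1), here is a sketch of a streaming softmax built on the standard online-softmax rescaling trick. This is an assumption-laden illustration of the general technique, not VEDA's actual scheduling; the paper's scheme and its LayerNorm counterpart may differ in detail.

```python
import math

def softmax_element_serial(xs):
    """Streaming ("element-serial") softmax: consumes one score at a time,
    keeping only a running max m and a running sum s -- O(1) state instead
    of buffering the entire length-N score vector before normalizing."""
    m = float("-inf")   # running maximum, for numerical stability
    s = 0.0             # running sum of exp(x - m)
    for x in xs:
        m_new = max(m, x)
        # Rescale the accumulated sum whenever the running max changes.
        s = s * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # A second serial pass emits normalized probabilities element by element.
    return [math.exp(x - m) / s for x in xs]
```

Because each element updates only two scalars, the per-element datapath cost is constant regardless of sequence length, which is the kind of O(N)-to-O(1) hardware reduction the abstract describes.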