BitROM: Weight Reload-Free CiROM Architecture Towards Billion-Parameter 1.58-bit LLM Inference

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the area and scalability bottlenecks of CiROM accelerators for deploying large language models (LLMs) on edge devices (e.g., LLaMA-7B requires >1,000 cm² of silicon area), this work proposes BitROM, a hardware architecture co-designed with the 1.58-bit ternary-quantized BitNet model. Key innovations include: (1) a bidirectional ROM array storing two ternary weights per transistor; (2) a tri-mode local accumulator; (3) integrated decode-refresh eDRAM enabling on-chip KV caching; and (4) embedded LoRA adapters for efficient transfer learning. Evaluated in 65 nm CMOS, BitROM achieves 20.8 TOPS/W energy efficiency and 4,967 kB/mm² bit density, a 10× improvement in area efficiency over prior digital CiROM designs, while reducing off-chip DRAM accesses by 43.6%. Per the authors, this is the first CiROM-based architecture to enable efficient billion-parameter LLM inference on resource-constrained edge platforms.
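The 1.58-bit BitNet model that BitROM is co-designed with quantizes every weight to a ternary value in {-1, 0, +1}. A minimal sketch of BitNet b1.58-style absmean quantization, for illustration only (the scaling scheme and `ternary_quantize` helper are assumptions, not code from the paper):

```python
import numpy as np

def ternary_quantize(W, eps=1e-8):
    """Round weights to {-1, 0, +1} using an absmean scale.

    gamma is the mean absolute weight; W / gamma is rounded and
    clipped so each weight fits in log2(3) ~= 1.58 bits.
    """
    gamma = np.abs(W).mean() + eps
    Wq = np.clip(np.round(W / gamma), -1, 1).astype(np.int8)
    return Wq, gamma  # Wq * gamma approximates W

W = np.array([[0.4, -1.2, 0.05],
              [0.9, -0.1, -0.7]])
Wq, gamma = ternary_quantize(W)
print(Wq)  # each entry is -1, 0, or +1
```

Since each ternary weight carries at most ~1.58 bits of information, two weights (9 combinations) fit comfortably into a few stored states, which is what BitROM's bidirectional ROM array exploits by packing two ternary weights per transistor.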

📝 Abstract
Compute-in-Read-Only-Memory (CiROM) accelerators offer outstanding energy efficiency for CNNs by eliminating runtime weight updates. However, their scalability to Large Language Models (LLMs) is fundamentally constrained by their vast parameter sizes. Notably, LLaMA-7B, the smallest model in the LLaMA series, demands more than 1,000 cm² of silicon area even in advanced CMOS nodes. This paper presents BitROM, the first CiROM-based accelerator that overcomes this limitation through co-design with BitNet's 1.58-bit quantization model, enabling practical and efficient LLM inference at the edge. BitROM introduces three key innovations: 1) a novel Bidirectional ROM Array that stores two ternary weights per transistor; 2) a Tri-Mode Local Accumulator optimized for ternary-weight computations; and 3) an integrated Decode-Refresh (DR) eDRAM that supports on-die KV-cache management, significantly reducing external memory access during decoding. In addition, BitROM integrates LoRA-based adapters to enable efficient transfer learning across various downstream tasks. Evaluated in 65 nm CMOS, BitROM achieves 20.8 TOPS/W and a bit density of 4,967 kB/mm², offering a 10× improvement in area efficiency over prior digital CiROM designs. Moreover, the DR eDRAM contributes to a 43.6% reduction in external DRAM access, further enhancing deployment efficiency for LLMs in edge applications.
Problem

Research questions and friction points this paper is trying to address.

Scalability limitation of CiROM accelerators for billion-parameter LLMs
Excessive silicon area requirements for LLM inference in edge devices
High external memory access during decoding in traditional architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional ROM Array stores two ternary weights per transistor
Tri-Mode Local Accumulator for ternary-weight computations
Integrated Decode-Refresh eDRAM for on-die KV-cache management
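The tri-mode accumulator relies on the fact that a ternary-weight MAC needs no multiplier: each weight merely selects one of three accumulator actions. A hedged software sketch of that dataflow (illustrative only; the paper realizes this in hardware, and `ternary_dot` is an assumed name):

```python
def ternary_dot(x, w):
    """Multiplier-free dot product with ternary weights in {-1, 0, +1}.

    Each weight picks one of three accumulator modes:
    +1 -> add the activation, -1 -> subtract it, 0 -> skip (hold).
    """
    acc = 0
    for xi, wi in zip(x, w):
        if wi == 1:        # add mode
            acc += xi
        elif wi == -1:     # subtract mode
            acc -= xi
        # wi == 0: skip mode, accumulator holds its value
    return acc

# Example: 3*(+1) + 5*(-1) + 7*0 = -2
print(ternary_dot([3, 5, 7], [1, -1, 0]))  # -> -2
```

Because the zero-weight case is a pure skip, sparse ternary weights cost no accumulator activity at all, which is one reason ternary CiROM datapaths are so energy-efficient.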