MeKi: Memory-based Expert Knowledge Injection for Efficient LLM Scaling

📅 2026-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Edge devices are constrained by limited memory and compute, which makes deploying large language models challenging. This work proposes MeKi, a system that introduces a model scaling paradigm decoupling model capacity from computational cost by substituting read-only memory (ROM) storage for FLOPs growth. MeKi embeds memory-based expert modules in each Transformer layer and uses re-parameterization to compress their trainable parameters into static lookup tables, injecting pre-stored semantic knowledge without any inference latency overhead. Experiments show that MeKi significantly improves generation performance while maintaining inference speed comparable to dense models, validating the effectiveness and practicality of memory-based model expansion.
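To make the mechanism concrete, here is a minimal sketch of what a per-layer, token-level memory expert could look like. The class name, shapes, and the additive injection are assumptions for illustration, not MeKi's actual implementation:

```python
import torch
import torch.nn as nn

class MemoryExpertLayer(nn.Module):
    """Hypothetical sketch: a Transformer sub-layer augmented with a
    token-level memory expert, i.e. a static per-layer lookup table
    that injects a pre-stored vector for each input token id.
    Names and shapes are illustrative assumptions, not MeKi's code."""

    def __init__(self, base_layer: nn.Module, vocab_size: int, d_model: int):
        super().__init__()
        self.base_layer = base_layer
        # Read-only lookup table: one d_model vector per vocabulary token.
        # Registered as a frozen buffer, so it contributes no trainable
        # parameters and no matrix multiplications at inference --
        # only a table read and an elementwise add.
        self.register_buffer("memory_table", torch.zeros(vocab_size, d_model))

    def forward(self, hidden: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        # Standard dense computation of the underlying layer.
        out = self.base_layer(hidden)
        # Inject pre-stored semantic knowledge, indexed by the token ids
        # of the current sequence: (B, T) ids -> (B, T, d_model) vectors.
        return out + self.memory_table[token_ids]
```

Because the table is indexed rather than multiplied through, growing its knowledge capacity enlarges storage (ROM) rather than per-token FLOPs, which is the scaling trade the summary describes.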

📝 Abstract
Scaling Large Language Models (LLMs) typically relies on increasing the number of parameters or test-time computations to boost performance. However, these strategies are impractical for edge-device deployment due to limited RAM and NPU resources. Despite these hardware constraints, deploying performant LLMs on edge devices such as smartphones remains crucial for user experience. To address this, we propose MeKi (Memory-based Expert Knowledge Injection), a novel system that scales LLM capacity via storage space rather than FLOPs. MeKi equips each Transformer layer with token-level memory experts that inject pre-stored semantic knowledge into the generation process. To bridge the gap between training capacity and inference efficiency, we employ a re-parameterization strategy to fold the parameter matrices used during training into a compact static lookup table. By offloading the knowledge to ROM, MeKi decouples model capacity from computational cost, introducing zero inference latency overhead. Extensive experiments demonstrate that MeKi significantly outperforms dense LLM baselines at identical inference speed, validating the effectiveness of the memory-based scaling paradigm for on-device LLMs. Project homepage is at https://github.com/ningding-o/MeKi.
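The folding step the abstract describes can be sketched as a one-time precomputation: since the expert is indexed only by token id, its training-time matrices can be evaluated once for every vocabulary entry and the outputs stored as the static table. The function below is a hypothetical illustration of that idea, assuming the expert is an MLP over the token embedding; it is not the paper's actual export code:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fold_expert_to_table(token_embedding: nn.Embedding,
                         expert_mlp: nn.Module) -> torch.Tensor:
    """Hypothetical re-parameterization: fold a trainable expert
    (assumed here to be an MLP over the token embedding) into a
    static lookup table by precomputing its output for every
    vocabulary id. The table can then be written to ROM and indexed
    at inference, replacing the expert's matrix multiplications
    with a single memory read."""
    vocab_ids = torch.arange(token_embedding.num_embeddings)
    # Evaluate the expert once per vocabulary token; at deployment the
    # MLP weights are discarded and only this (V, d_model) table is kept.
    # For large vocabularies this pass could be chunked to bound memory.
    return expert_mlp(token_embedding(vocab_ids))
```

Under this reading, the lookup cost at inference is independent of the expert's training-time width or depth, which is why capacity can grow without adding latency.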
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Edge Deployment
Memory Efficiency
On-device Inference
Model Scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Memory-based Scaling
Expert Knowledge Injection
On-device LLM
Re-parameterization
Zero Latency Overhead