PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing

📅 2025-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the mismatch between the high inference overhead of large language models (LLMs) and the stringent storage, bandwidth, and power constraints of edge devices, this paper proposes PLM, a Peripheral Language Model co-designed with edge system constraints. Methodologically, PLM combines three key ingredients: (1) a Multi-head Latent Attention mechanism, (2) a squared-ReLU activation that encourages sparsity and lowers peak memory during inference, and (3) a multi-phase training pipeline that uses the Warmup-Stable-Decay-Constant (WSDC) learning rate scheduler and, after a two-phase SFT stage, applies RLHF with the ARIES preference learning approach. This alignment recipe yields gains of 2% on general tasks, 9% on GSM8K, and 11% on coding benchmarks, and PLM outperforms existing open-source small models of comparable size trained on public data while activating the fewest parameters. Deployment on consumer-grade GPUs, mobile phones, and Raspberry Pi devices confirms its suitability for peripheral applications. The core contribution is a hardware-aware, sparsity-oriented architecture coupled with an efficient alignment paradigm, delivering practical, deployable language understanding and generation for edge intelligence.
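The sparsity mechanism named above is straightforward to illustrate. Below is a minimal PyTorch sketch of a feed-forward block with a squared-ReLU activation; the dimensions, layer names, and the sparsity measurement are illustrative assumptions, not the published PLM configuration.

```python
import torch
import torch.nn as nn

class SquaredReLUFFN(nn.Module):
    """Feed-forward block with a squared-ReLU activation.

    relu(x) ** 2 leaves most intermediate activations at exactly zero, so only
    a fraction of the hidden neurons contribute to each token. On edge hardware
    this sparsity can be exploited to skip the corresponding rows of the
    down-projection, lowering peak memory traffic during inference.
    Dimensions below are placeholders, not PLM's actual configuration.
    """

    def __init__(self, d_model: int = 2048, d_ff: int = 8192):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff, bias=False)     # up-projection
        self.down = nn.Linear(d_ff, d_model, bias=False)   # down-projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.up(x)) ** 2   # squared ReLU: non-negative and sparse
        return self.down(h)


if __name__ == "__main__":
    ffn = SquaredReLUFFN()
    x = torch.randn(1, 16, 2048)                      # (batch, seq, d_model)
    h = torch.relu(ffn.up(x)) ** 2
    print(f"fraction of zero activations: {(h == 0).float().mean():.2f}")
    print(ffn(x).shape)                               # torch.Size([1, 16, 2048])
```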

📝 Abstract
While scaling laws have been continuously validated in large language models (LLMs) with increasing model parameters, the inherent tension between the inference demands of LLMs and the limited resources of edge devices poses a critical challenge to the development of edge intelligence. Recently, numerous small language models have emerged, aiming to distill the capabilities of LLMs into smaller footprints. However, these models often retain the fundamental architectural principles of their larger counterparts, still imposing considerable strain on the storage and bandwidth capacities of edge devices. In this paper, we introduce the PLM, a Peripheral Language Model, developed through a co-design process that jointly optimizes model architecture and edge system constraints. The PLM utilizes a Multi-head Latent Attention mechanism and employs the squared ReLU activation function to encourage sparsity, thereby reducing peak memory footprint during inference. During training, we collect and reorganize open-source datasets, implement a multi-phase training strategy, and empirically investigate the Warmup-Stable-Decay-Constant (WSDC) learning rate scheduler. Additionally, we incorporate Reinforcement Learning from Human Feedback (RLHF) by adopting the ARIES preference learning approach. Following a two-phase SFT process, this method yields performance gains of 2% in general tasks, 9% in the GSM8K task, and 11% in coding tasks. In addition to its novel architecture, evaluation results demonstrate that PLM outperforms existing small language models trained on publicly available data while maintaining the lowest number of activated parameters. Furthermore, deployment across various edge devices, including consumer-grade GPUs, mobile phones, and Raspberry Pis, validates PLM's suitability for peripheral applications. The PLM series models are publicly available at https://github.com/plm-team/PLM.
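As a reading aid, here is one plausible interpretation of the Warmup-Stable-Decay-Constant (WSDC) schedule named in the abstract, expressed as a small Python function. The four phases follow directly from the name; the step counts, peak rate, and final rate are placeholder assumptions rather than PLM's actual training hyperparameters.

```python
def wsdc_lr(step: int,
            peak_lr: float = 3e-4,
            final_lr: float = 3e-5,
            warmup_steps: int = 2_000,
            stable_steps: int = 50_000,
            decay_steps: int = 10_000) -> float:
    """Warmup-Stable-Decay-Constant learning rate, read off the scheduler's name:
    linear warmup -> constant peak -> linear decay -> constant final rate.
    All numbers here are illustrative placeholders.
    """
    if step < warmup_steps:                 # 1) warmup: ramp up linearly
        return peak_lr * step / max(warmup_steps, 1)
    step -= warmup_steps
    if step < stable_steps:                 # 2) stable: hold the peak rate
        return peak_lr
    step -= stable_steps
    if step < decay_steps:                  # 3) decay: anneal toward final_lr
        return peak_lr + (step / decay_steps) * (final_lr - peak_lr)
    return final_lr                         # 4) constant: continue at the floor


if __name__ == "__main__":
    for s in (0, 1_000, 30_000, 57_000, 80_000):
        print(s, f"{wsdc_lr(s):.2e}")
```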
Problem

Research questions and friction points this paper is trying to address.

Deploying large language models on edge devices is constrained by limited storage, bandwidth, and power.
Existing small language models largely inherit the architectural principles of their larger counterparts and still strain edge hardware.
Model architectures and edge system constraints are typically optimized in isolation rather than co-designed.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hardware-model co-design jointly optimizes the architecture against edge system constraints.
Multi-head Latent Attention and squared-ReLU sparsity reduce peak memory footprint during inference.
RLHF with ARIES preference learning yields gains of 2% on general tasks, 9% on GSM8K, and 11% on coding.
Authors

Cheng Deng
University of Edinburgh
On-device LLM, NLP, GeoAI

Luoyang Sun
Institute of Automation, Chinese Academy of Sciences
Machine Learning

Jiwen Jiang
Institute of Automation, Chinese Academy of Sciences
Large Language Model, Reinforcement Learning

Yongcheng Zeng
University of Chinese Academy of Sciences
LLM, Reinforcement Learning

Xinjian Wu
University College London

Wenxin Zhao
The Hong Kong University of Science and Technology (Guangzhou)

Qingfa Xiao
PhD student, The Hong Kong University of Science and Technology (Guangzhou)
Natural Language Processing, Contrastive Learning, Large Language Model

Jiachuan Wang
The Hong Kong University of Science and Technology

Lei Chen
The Hong Kong University of Science and Technology (Guangzhou); The Hong Kong University of Science and Technology

Lionel M. Ni
The Hong Kong University of Science and Technology (Guangzhou)

Haifeng Zhang
Institute of Automation, Chinese Academy of Sciences

Jun Wang
University College London; The Hong Kong University of Science and Technology (Guangzhou)