ShishuLM: Lightweight Language Model with Hybrid Decoder-MLP Architecture and Paired Weight Sharing

πŸ“… 2025-10-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the large parameter counts, high KV cache overhead, and low inference efficiency of Transformer-based language models, this paper proposes ShishuLM, a lightweight language model tailored for moderate-context scenarios. Methodologically, ShishuLM rests on three ideas: (1) a hybrid decoder-MLP architecture that approximates full Transformer decoder blocks with MLP modules; (2) paired inter-layer weight sharing that substantially reduces the parameter count; and (3) the observation that, for moderate contexts, normalization coupled with attention is roughly linear in the input, which justifies the MLP approximation. Experiments on two SLMs show that ShishuLM remains competitive with its parent models while reducing memory requirements (parameter count and KV cache footprint) by up to 25% and cutting training and inference latency by up to 40%. These results offer practical guidance for building more efficient small language model architectures from a pre-training standpoint.
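The summary names paired inter-layer weight sharing as the second ingredient. A minimal sketch of one way to realize it (our own illustration; the paper's exact pairing scheme may differ): consecutive layers simply reuse the same module instance, so their weights are tied and the stack stores roughly half as many unique parameters.

```python
# Illustrative sketch of paired inter-layer weight sharing (not the authors'
# implementation). Layers 2k and 2k+1 point to the same module instance,
# so all of their weights are shared.
import torch.nn as nn


def build_paired_stack(layer_factory, n_layers: int) -> nn.ModuleList:
    """layer_factory() -> nn.Module builds one block; each consecutive pair
    of positions in the stack reuses the same instance (tied weights)."""
    unique = [layer_factory() for _ in range((n_layers + 1) // 2)]
    return nn.ModuleList([unique[i // 2] for i in range(n_layers)])


# Example: an 8-layer stack of simple feed-forward blocks holds only
# 4 layers' worth of unique parameters.
stack = build_paired_stack(
    lambda: nn.Sequential(nn.Linear(512, 2048), nn.SiLU(), nn.Linear(2048, 512)),
    n_layers=8,
)
assert sum(p.numel() for p in stack.parameters()) == \
       4 * sum(p.numel() for p in stack[0].parameters())
```

With this tying, an 8-layer stack stores only 4 layers' worth of weights, which is where the parameter savings come from; how ShishuLM selects and trains the shared pairs is described in the paper itself.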

πŸ“ Abstract
While the transformer architecture has achieved state-of-the-art performance on natural language processing tasks, these models impose substantial memory and computational overhead. Recent research has identified significant architectural redundancies within these models, presenting opportunities for optimization without compromising performance. Taking insights from research in AI interpretability and inference-time layer pruning, we introduce an efficient language model architecture, referred to as ShishuLM, which reduces both the parameter count and Key-Value (KV) cache requirements. Given the increasing importance of Small Language Models (SLMs) in agentic AI systems, we evaluate our approach on two SLMs of different scales. Our analysis reveals that for moderate-context scenarios, normalization coupled with attention computation is roughly linear with the input, enabling entire transformer blocks to be approximated through Multi-Layer Perceptrons (MLPs). Our results show that ShishuLM provides up to 25% reduction in memory requirements and up to 40% improvement in latency during both training and inference, compared to parent models. Our experimental and analytical findings provide insights towards building more efficient SLM architectures from a pre-training standpoint.
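The abstract's central observation is that, for moderate contexts, normalization coupled with attention behaves roughly linearly in the input, so entire transformer blocks can be approximated by MLPs. A hedged PyTorch sketch of such a hybrid decoder-MLP stack (module names, hyperparameters, and the choice of which layers to replace are assumptions on our part, not the authors' configuration):

```python
# Illustrative sketch only: a decoder stack in which selected transformer
# blocks are replaced by attention-free MLP blocks.
import torch
import torch.nn as nn


class MLPBlock(nn.Module):
    """Approximates a full decoder block with norm + feed-forward only;
    no attention, hence no key/value entries for this layer."""
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(),
                                nn.Linear(d_ff, d_model))

    def forward(self, x):
        return x + self.ff(self.norm(x))


class DecoderBlock(nn.Module):
    """Standard pre-norm decoder block: causal self-attention + feed-forward."""
    def __init__(self, d_model, n_heads, d_ff):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(),
                                nn.Linear(d_ff, d_model))

    def forward(self, x):
        h = self.norm1(x)
        # Boolean causal mask: True marks positions that may not be attended.
        causal = torch.triu(torch.ones(x.size(1), x.size(1), dtype=torch.bool,
                                       device=x.device), diagonal=1)
        a, _ = self.attn(h, h, h, attn_mask=causal)
        x = x + a
        return x + self.ff(self.norm2(x))


def build_hybrid_stack(n_layers=12, d_model=512, n_heads=8, d_ff=2048,
                       mlp_layers=(3, 7, 11)):
    """Layers listed in `mlp_layers` (chosen arbitrarily here) become
    MLP-only blocks; the rest remain full decoder blocks."""
    return nn.ModuleList([
        MLPBlock(d_model, d_ff) if i in mlp_layers
        else DecoderBlock(d_model, n_heads, d_ff)
        for i in range(n_layers)
    ])
```

Because an MLPBlock has no attention module, every replaced layer contributes neither keys nor values to the KV cache, which is where the cache savings come from.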
Problem

Research questions and friction points this paper is trying to address.

Reducing transformer model memory and computational overhead
Optimizing architecture without compromising language model performance
Improving efficiency of Small Language Models for agentic AI
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid decoder-MLP architecture reduces model redundancy
Paired weight sharing minimizes parameter count and KV cache
MLP approximations replace transformer blocks for efficiency (see the back-of-envelope KV cache arithmetic after this list)
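As a back-of-envelope illustration (our own arithmetic and notation, not taken from the paper): the per-token KV cache scales with the number of attention-bearing layers, so replacing a fraction f of decoder blocks with attention-free MLP blocks shrinks the cache by that same fraction.

```latex
% Illustrative only; symbols are ours. L = attention layers, n_h = heads,
% d_h = head dimension, b = bytes per element, f = fraction of blocks
% replaced by MLP-only blocks.
\[
  M_{\mathrm{KV}} = 2\,L\,n_h\,d_h\,b
  \quad\longrightarrow\quad
  M_{\mathrm{KV}}' = 2\,(1-f)\,L\,n_h\,d_h\,b ,
  \qquad
  \frac{M_{\mathrm{KV}} - M_{\mathrm{KV}}'}{M_{\mathrm{KV}}} = f .
\]
% e.g. f = 1/4 would be consistent with the ~25% KV cache reduction reported above.
```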
πŸ”Ž Similar Papers
No similar papers found.
S
Shivanshu Kumar
Department of Computer Science and Engineering, Indian Institute of Technology, Madras
Gopalakrishnan Srinivasan
Assistant Professor at IIT Madras
RISC-V SoC, AI Accelerator Architectures, Deep Learning, Spiking Neural Networks