AI Summary
To address the large parameter counts, high KV-cache overhead, and low inference efficiency of Transformer-based language models, this paper proposes ShishuLM, a lightweight language model tailored to moderate-context scenarios. Methodologically, ShishuLM introduces three key ideas: (1) a hybrid decoder-MLP architecture that approximates full Transformer decoder blocks with MLP modules; (2) pairwise inter-layer weight sharing to significantly reduce the parameter count; and (3) a normalization-aware linear approximation of attention that enables dynamic layer pruning at inference time. Experimental results show that ShishuLM achieves competitive performance while reducing both parameter count and KV-cache footprint by approximately 25% and improving training and inference latency by up to 40%. These advances offer a practical path toward efficient deployment of compact language models.
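The three ideas can be pictured with a short PyTorch sketch. This is a hedged illustration, not the authors' implementation: the class names (MLPBlock, AttnBlock, HybridStack), dimensions, the alternation pattern, and the exact pairing scheme are assumptions made only to show how MLP-only blocks (which need no attention and hence no KV cache) can be interleaved with standard decoder blocks while consecutive depth positions reuse the same weights.

```python
# Hypothetical sketch of a hybrid decoder-MLP stack with pairwise weight sharing.
# Names, sizes, and the alternation pattern are illustrative assumptions,
# not ShishuLM's actual configuration.
import torch
import torch.nn as nn


class MLPBlock(nn.Module):
    """MLP-only block standing in for a full decoder block: no attention, no KV cache."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

    def forward(self, x):
        return x + self.ff(self.norm(x))


class AttnBlock(nn.Module):
    """Standard pre-norm decoder block (causal masking omitted for brevity)."""
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))

    def forward(self, x):
        h = self.norm1(x)
        a, _ = self.attn(h, h, h, need_weights=False)
        x = x + a
        return x + self.ff(self.norm2(x))


class HybridStack(nn.Module):
    """Alternate attention blocks and MLP-only blocks; apply each block twice
    in a row so that consecutive layer pairs share one set of weights."""
    def __init__(self, n_unique: int, d_model: int = 512, n_heads: int = 8, d_ff: int = 2048):
        super().__init__()
        self.blocks = nn.ModuleList(
            AttnBlock(d_model, n_heads, d_ff) if i % 2 == 0 else MLPBlock(d_model, d_ff)
            for i in range(n_unique)
        )

    def forward(self, x):
        for block in self.blocks:
            x = block(block(x))  # pairwise sharing: effective depth is 2 * n_unique
        return x


if __name__ == "__main__":
    x = torch.randn(2, 128, 512)              # (batch, seq_len, d_model)
    print(HybridStack(n_unique=4)(x).shape)   # torch.Size([2, 128, 512])
```

Replacing an attention block with an MLP block removes that layer's K/V projections and its per-token cache entries, which is where the parameter and KV-cache savings in the summary come from.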
Abstract
While transformer-based models have achieved state-of-the-art performance on natural language processing tasks, they impose substantial memory and computational overhead. Recent research has identified significant architectural redundancies within these models, presenting opportunities for optimization without compromising performance. Drawing on insights from research in AI interpretability and inference-time layer pruning, we introduce an efficient language model architecture, referred to as ShishuLM, which reduces both the parameter count and Key-Value (KV) cache requirements. Given the increasing importance of Small Language Models (SLMs) in agentic AI systems, we evaluate our approach on two SLMs of different scales. Our analysis reveals that for moderate-context scenarios, normalization coupled with attention computation is roughly linear in the input, enabling entire transformer blocks to be approximated through Multi-Layer Perceptrons (MLPs). Our results show that ShishuLM provides up to a 25% reduction in memory requirements and up to a 40% improvement in latency during both training and inference, compared to the parent models. Our experimental and analytical findings provide insights towards building more efficient SLM architectures from a pre-training standpoint.
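To make the block-level approximation claim concrete, the sketch below fits a token-wise MLP to reproduce the input-to-output map of a single decoder block. This is not the paper's pre-training procedure; it is a simple block-level distillation setup using a randomly initialized stand-in block and random inputs, shown only to illustrate what "approximating an entire transformer block with an MLP" means in practice.

```python
# Illustrative block-level distillation sketch (assumptions, not ShishuLM's method):
# fit a token-wise MLP so that x + student(x) matches a decoder block's output.
import torch
import torch.nn as nn

d_model, seq_len = 512, 256

# Stand-in "parent" block; in practice this would be a trained decoder block from the
# parent SLM, and x would be real hidden states from moderate-length contexts.
parent_block = nn.TransformerEncoderLayer(
    d_model, nhead=8, dim_feedforward=2048, batch_first=True, norm_first=True
).eval()

# Student: a token-wise MLP, so it needs no KV cache at inference time.
student = nn.Sequential(
    nn.LayerNorm(d_model),
    nn.Linear(d_model, 2048), nn.GELU(), nn.Linear(2048, d_model),
)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(200):
    x = torch.randn(8, seq_len, d_model)            # placeholder for real hidden states
    with torch.no_grad():
        target = parent_block(x)
    loss = nn.functional.mse_loss(x + student(x), target)  # residual MLP vs. full block
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"block-approximation MSE after fitting: {loss.item():.4f}")
```

If the (normalization + attention) sub-computation is indeed roughly linear in the input for moderate contexts, its effect can be folded into the MLP's linear layers, which is why a block without attention can stand in for the full block in those scenarios.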