Masked Gated Linear Unit

📅 2025-06-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
GLUs in large-model FFNs incur double the memory reads of ungated feed-forward layers, because the gate and value streams each access a separate weight matrix. To address this, the paper proposes Masked GLU (MGLU), which unifies the gating and value pathways on a single shared weight matrix via element-wise binary masking, eliminating the redundant weight accesses. It further introduces a Mixture-of-Element-wise-Gates (MoEG) architecture that learns the binary masks end-to-end and differentiably, and FlashMGLU, a hardware-optimized CUDA kernel. On an RTX 5090, FlashMGLU delivers up to a 19.7× inference-time speed-up over a naive PyTorch MGLU implementation, and is 47% more memory-efficient and 34% faster than standard GLUs. The Swish-activated variant SwiMGLU preserves these memory advantages while matching or exceeding the downstream accuracy of the SwiGLU baseline.

📝 Abstract
Gated Linear Units (GLUs) have become essential components in the feed-forward networks of state-of-the-art Large Language Models (LLMs). However, they require twice as many memory reads compared to feed-forward layers without gating, due to the use of separate weight matrices for the gate and value streams. To address this bottleneck, we introduce Masked Gated Linear Units (MGLUs), a novel family of GLUs with an efficient kernel implementation. The core contributions of MGLUs include: (1) the Mixture of Element-wise Gating (MoEG) architecture that learns multiple binary masks, each determining gate or value assignments at the element level on a single shared weight matrix, resulting in reduced memory transfer, and (2) FlashMGLU, a hardware-friendly kernel that yields up to a 19.7$\times$ inference-time speed-up over a naive PyTorch MGLU and is 47% more memory-efficient and 34% faster than standard GLUs despite added architectural complexity on an RTX 5090 GPU. In LLM experiments, the Swish-activated variant SwiMGLU preserves its memory advantages while matching, or even surpassing, the downstream accuracy of the SwiGLU baseline.
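To make the masked-gating idea concrete, here is a minimal numpy sketch of a SwiMGLU-style forward pass as described in the abstract: one shared weight matrix, with a binary mask assigning each weight element to the gate stream and its complement to the value stream, so the matrix is read once instead of twice. All names (`swimglu_forward`, the single-mask setup) are illustrative assumptions, not the paper's actual implementation, which uses multiple learned masks (MoEG) and a custom CUDA kernel.

```python
import numpy as np

def swish(x):
    """Swish / SiLU activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def swimglu_forward(x, W, mask, W_out):
    """Hypothetical single-mask SwiMGLU sketch.

    The gate and value pathways share one weight matrix W.
    A binary mask assigns each element of W to the gate stream;
    the complementary elements feed the value stream, so the two
    streams together touch each weight exactly once.
    """
    gate = x @ (W * mask)            # gate pathway: masked elements only
    value = x @ (W * (1.0 - mask))   # value pathway: complementary elements
    return (swish(gate) * value) @ W_out

# Toy dimensions for illustration.
rng = np.random.default_rng(0)
d, h = 8, 16
x = rng.standard_normal((2, d))
W = rng.standard_normal((d, h))
mask = (rng.random((d, h)) < 0.5).astype(W.dtype)  # fixed binary mask; the paper learns these
W_out = rng.standard_normal((h, d))

y = swimglu_forward(x, W, mask, W_out)
print(y.shape)
```

In the actual method the masks are learned end-to-end and multiple masks are mixed (MoEG); the memory saving comes from FlashMGLU fusing the masked reads so W is fetched from memory only once.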
Problem

Research questions and friction points this paper is trying to address.

Reducing memory reads in GLUs for LLMs
Improving memory efficiency in feed-forward networks
Enhancing inference speed without sacrificing accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked Gated Linear Units reduce memory transfer
Mixture of Element-wise Gating uses shared weights
FlashMGLU kernel boosts speed and memory efficiency