Return of the Encoder: Maximizing Parameter Efficiency for SLMs

📅 2025-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address high first-token latency and low throughput of small models (≤1B) on edge devices, this work revisits the encoder-decoder architecture for its efficiency advantages under resource constraints. We propose a task-adaptive knowledge distillation framework that enables lightweight encoder-decoder student models to effectively inherit capabilities from large decoder-only teachers while preserving their intrinsic properties—single-pass input encoding and decoupled understanding and generation. This is the first systematic validation of such architectures on asymmetric-sequence tasks. Integrated with RoPE, cross-platform optimization (GPU/CPU/NPU), and fused visual encoders, our approach achieves a 47% reduction in first-token latency, a 4.7× throughput improvement, and an average +6-point gain in task-specific performance—particularly pronounced in tasks with large input-output distribution mismatch.
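The paper's task-adaptive distillation framework is not specified in detail here, but the core mechanism of distilling a decoder-only teacher into a smaller student is standard logit distillation: match the student's temperature-softened next-token distribution to the teacher's via a KL divergence. The sketch below is a generic illustration of that loss (names `softmax` and `distillation_loss` are our own, not from the paper), assuming the teacher and student share a vocabulary so their logits are comparable.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the vocabulary (last) axis.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened token distributions,
    # scaled by T^2 as in standard logit distillation.
    # Shapes: (num_tokens, vocab_size).
    p = softmax(teacher_logits, T)
    log_p = np.log(p + 1e-12)
    log_q = np.log(softmax(student_logits, T) + 1e-12)
    return float((p * (log_p - log_q)).sum(axis=-1).mean() * T * T)
```

In practice this term is typically combined with the ordinary cross-entropy on ground-truth tokens; for an encoder-decoder student, the loss is applied to the decoder's output positions.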

📝 Abstract
The dominance of large decoder-only language models has overshadowed encoder-decoder architectures, despite their fundamental efficiency advantages in sequence processing. For small language models (SLMs), those with 1 billion parameters or fewer, our systematic analysis across GPU, CPU, and NPU platforms reveals that encoder-decoder architectures achieve 47% lower first-token latency and 4.7× higher throughput than decoder-only models on edge devices. These gains may be attributed to the encoder-decoder's one-time input processing and its separation of the understanding and generation phases. We introduce a novel knowledge distillation framework that enables encoder-decoder models to leverage capabilities from large, scalable decoder-only teachers while preserving their architectural advantages, achieving an average improvement of up to 6 points across diverse tasks, with significant gains in asymmetric sequence tasks where input and output distributions benefit from different processing approaches. When combined with modern advances such as Rotary Positional Embeddings (RoPE) and vision encoders, our systematic investigation demonstrates that encoder-decoder architectures provide a more practical path toward deploying capable language models in resource-constrained environments. Our findings challenge the prevailing trend toward decoder-only scaling, showing that architectural choices become increasingly crucial as parameter budgets decrease, particularly for on-device and edge deployments where computational efficiency is paramount.
Problem

Research questions and friction points this paper is trying to address.

Encoder-decoder efficiency for small language models on edge devices
Knowledge distillation from decoder-only teachers to encoder-decoder models
Optimizing architectural choices for resource-constrained deployment environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encoder-decoder architecture for efficient sequence processing
Knowledge distillation from decoder-only teacher models
Integration of RoPE and Vision encoders for deployment
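The bullets above mention integrating RoPE. As a point of reference, rotary embeddings encode position by rotating pairs of feature channels by position-dependent angles, so relative offsets fall out of the dot product between queries and keys. A minimal sketch follows, using the half-split pairing convention (one of two common conventions; the paper does not say which it uses, and `rope` is our own illustrative name):

```python
import numpy as np

def rope(x, base=10000.0):
    # Apply rotary positional embedding to x of shape (seq_len, dim),
    # dim even. Channels are paired as (x[:, i], x[:, i + dim//2]) and
    # each pair is rotated by angle pos * base^(-i / (dim//2)).
    seq, dim = x.shape
    half = dim // 2
    inv_freq = base ** (-np.arange(half) / half)
    angles = np.outer(np.arange(seq), inv_freq)   # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

Because each pair is a pure rotation, the transform preserves vector norms and leaves position 0 unchanged, which is why it composes cleanly with attention in both encoder and decoder stacks.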