AI Summary
Existing LLM inference architectures employ a fixed-depth Transformer stack for all tokens, despite empirical evidence that computational depth requirements vary significantly across tokens during autoregressive generation. Method: This paper proposes FlexiDepth, a plug-and-play dynamic layer-skipping framework. It introduces a lightweight, parameter-efficient router and adapter module that enables token-wise adaptive layer skipping without modifying the base model's parameters. Contribution/Results: FlexiDepth empirically characterizes and exploits the heterogeneity in per-token depth demand, yielding a routing pattern that aligns with human intuition. Applied to Llama-3-8B, it skips an average of 8 of the 32 layers (a quarter of the stack) while fully preserving benchmark performance. The authors open-source FlexiDepth along with a dataset documenting its layer allocation patterns.
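To make the architecture concrete, here is a minimal PyTorch sketch of the idea: a frozen Transformer layer wrapped with a small router that scores each token and a bottleneck adapter on the skip path. The class name `FlexiDepthLayer`, the `adapter_dim` bottleneck size, and the sigmoid gate are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class FlexiDepthLayer(nn.Module):
    """Frozen Transformer layer wrapped with a per-token router and a
    lightweight adapter on the skip path (illustrative sketch only)."""

    def __init__(self, base_layer: nn.Module, hidden_dim: int, adapter_dim: int = 64):
        super().__init__()
        self.base_layer = base_layer
        for p in self.base_layer.parameters():
            p.requires_grad = False  # original parameters stay untouched
        # Router: one score per token deciding whether this layer is needed.
        self.router = nn.Linear(hidden_dim, 1)
        # Adapter: small bottleneck that keeps skipped tokens' representations
        # compatible with later layers.
        self.adapter = nn.Sequential(
            nn.Linear(hidden_dim, adapter_dim),
            nn.GELU(),
            nn.Linear(adapter_dim, hidden_dim),
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim)
        gate = torch.sigmoid(self.router(hidden))   # per-token score in (0, 1)
        processed = self.base_layer(hidden)         # full-compute path
        skipped = hidden + self.adapter(hidden)     # cheap skip path
        # Training-style sketch: both paths are computed and mixed softly.
        # At inference, tokens with a low gate would run only the skip path,
        # which is where the compute savings come from.
        return gate * processed + (1.0 - gate) * skipped
```

Because the base layer is frozen, only the router and adapter are trained, which is what makes the approach plug-and-play on top of an existing model.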
Abstract
Various layer-skipping methods have been proposed to accelerate token generation in large language models (LLMs). However, they overlook a fundamental question: how do computational demands vary across the generation of different tokens? In this work, we introduce FlexiDepth, a method that dynamically adjusts the number of Transformer layers used in text generation. By incorporating a plug-in router and adapter, FlexiDepth enables adaptive layer skipping in LLMs without modifying their original parameters. Applying FlexiDepth to the Llama-3-8B model skips 8 out of 32 layers while maintaining full (100%) benchmark performance. Experimental results with FlexiDepth demonstrate that computational demands in LLMs vary significantly with token type: generating repetitive tokens or fixed phrases requires fewer layers, whereas producing tokens involving computation or high uncertainty requires more. Interestingly, this adaptive allocation pattern aligns with human intuition. To advance research in this area, we open-source FlexiDepth and a dataset documenting FlexiDepth's layer allocation patterns for future exploration.
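As a hypothetical usage example (assuming the `FlexiDepthLayer` sketch above is in scope), the snippet below wraps a toy stack of stand-in layers and counts how many layers the router would skip for each token; per-token counts of this kind are what the released layer-allocation dataset documents. The stand-in `nn.Linear` layers and the 0.5 skip threshold are assumptions for illustration.

```python
import torch
import torch.nn as nn

hidden_dim, num_layers, threshold = 256, 8, 0.5
# Stand-in "Transformer layers" so the example stays self-contained.
stack = nn.ModuleList(
    FlexiDepthLayer(nn.Linear(hidden_dim, hidden_dim), hidden_dim)
    for _ in range(num_layers)
)

hidden = torch.randn(1, 16, hidden_dim)  # (batch, seq_len, hidden_dim)
skips_per_token = torch.zeros(16)
with torch.no_grad():
    for layer in stack:
        gate = torch.sigmoid(layer.router(hidden))             # (1, 16, 1)
        skips_per_token += (gate.squeeze() < threshold).float()
        hidden = layer(hidden)  # sketch forward recomputes the gate internally

print(skips_per_token)  # how many of the 8 layers each token would skip
```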