🤖 AI Summary
This work systematically investigates the efficacy of layer-wise early exiting in modern large language models across diverse architectures, including dense Transformers, Mixture-of-Experts (MoE), and state space models, and finds that its effectiveness at balancing accuracy and inference efficiency has declined significantly. The study introduces a metric to quantify a model's intrinsic suitability for early exiting and establishes a benchmark for evaluating early-exit decoding strategies across models and workloads. Through analysis of intermediate-representation dynamics and confidence-driven exit mechanisms, the findings show that newer large models generally yield diminishing returns from early exiting, with dense Transformers exhibiting greater potential than alternative architectures. Moreover, base pre-trained models exceeding 20 billion parameters benefit more substantially from early-exit approaches.
📝 Abstract
In Large Language Model (LLM) inference, early-exit refers to stopping computation at an intermediate layer once the prediction is sufficiently confident, thereby reducing latency and cost. However, recent LLMs adopt improved pretraining recipes and architectures that reduce layer redundancy, potentially limiting early-exit opportunities. We re-evaluate layer-wise early-exit in modern LLMs and analyze how intermediate representations evolve during training. We introduce a metric to quantify a model's intrinsic suitability for early-exit and propose a benchmark for researchers to explore the potential early-exit benefits on different models and workloads. Our results show a diminishing trend in early-exit effectiveness across newer model generations. We further find that dense transformers generally offer greater early-exit potential than Mixture-of-Experts and State Space Models. In addition, larger models, particularly those with more than 20 billion parameters, and base pretrained models without specialized tuning tend to exhibit higher early-exit potential.
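The confidence-driven exit rule described in the abstract can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: it assumes each layer's hidden state has already been projected through a shared LM head into per-layer logits, and the threshold value is arbitrary.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def early_exit_decode(layer_logits, threshold=0.9):
    """Return (exit_layer, token_id) for one decoding step.

    Exits at the first layer whose top softmax probability reaches
    the confidence threshold; otherwise falls back to the final layer.
    `layer_logits` is a list (one entry per layer) of vocab-sized logit lists.
    """
    for layer_idx, logits in enumerate(layer_logits):
        probs = softmax(logits)
        top_p = max(probs)
        if top_p >= threshold:
            return layer_idx, probs.index(top_p)
    # No layer was confident enough: use the last (full-depth) prediction.
    final_probs = softmax(layer_logits[-1])
    return len(layer_logits) - 1, final_probs.index(max(final_probs))

# Toy 3-layer model with a 4-token vocabulary: confidence grows with depth.
toy_logits = [
    [1.0, 1.0, 1.0, 1.0],  # layer 0: uniform, top prob 0.25
    [2.0, 0.0, 0.0, 0.0],  # layer 1: top prob ~0.71
    [5.0, 0.0, 0.0, 0.0],  # layer 2: top prob ~0.98, exceeds threshold
]
print(early_exit_decode(toy_logits, threshold=0.9))  # → (2, 0)
```

In this sketch the model "saves" zero layers on the toy input because only the last layer is confident; the paper's diminishing-returns finding corresponds to this situation becoming the common case in newer models, so the threshold is rarely crossed early.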