🤖 AI Summary
Current AI research disproportionately emphasizes decoder-only large language models (LLMs), overlooking the scaling potential of encoder-decoder architectures. Method: This work systematically evaluates scaling behavior, out-of-distribution context length extrapolation, and inference efficiency across both architectures. It is the first to uniformly apply prefix language modeling, RedPajama V1 pretraining, and FLAN-style instruction tuning to encoder-decoder models, enabling a rigorous head-to-head comparison with decoder-only counterparts across a ~150M–8B parameter range. Contribution/Results: Instruction-tuned encoder-decoder models match or exceed the downstream task performance of same-scale decoder-only models, achieve 40–60% lower inference latency, and show comparable context length extrapolation. These findings highlight the underappreciated efficiency and scalability of encoder-decoder architectures, providing new empirical evidence to inform architectural choices in large model development.
📝 Abstract
Recent large language model (LLM) research has undergone an architectural shift from encoder-decoder modeling to the now-dominant decoder-only modeling. This rapid transition, however, has come without a rigorous comparative analysis, especially *from the scaling perspective*, raising concerns that the potential of encoder-decoder models may have been overlooked. To fill this gap, we revisit the encoder-decoder LLM (RedLLM), enhancing it with recent recipes from decoder-only LLMs (DecLLM). We conduct a comprehensive comparison between RedLLM, pretrained with prefix language modeling (LM), and DecLLM, pretrained with causal LM, at model scales ranging from ~150M to ~8B parameters. Using RedPajama V1 (1.6T tokens) for pretraining and FLAN for instruction tuning, our experiments show that RedLLM exhibits compelling scaling properties and surprisingly strong performance. While DecLLM is overall more compute-optimal during pretraining, RedLLM demonstrates comparable scaling and context length extrapolation capabilities. After instruction tuning, RedLLM achieves comparable or even better results on various downstream tasks while enjoying substantially better inference efficiency. We hope our findings inspire further efforts to re-examine RedLLM, unlocking its potential for developing powerful and efficient LLMs.
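The core pretraining difference the abstract draws, prefix LM for RedLLM versus causal LM for DecLLM, comes down to the attention mask: a prefix LM attends bidirectionally over the input (prefix) segment and causally over the target segment, while a causal LM is strictly lower-triangular. A minimal NumPy sketch of the two masks (our own illustration, not the paper's code):

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """Causal LM mask: token i may attend only to tokens 0..i."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def prefix_lm_mask(seq_len: int, prefix_len: int) -> np.ndarray:
    """Prefix LM mask: full bidirectional attention within the first
    `prefix_len` (input) tokens; causal attention for the rest (targets)."""
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    mask[:prefix_len, :prefix_len] = True  # prefix tokens see the whole prefix
    return mask

# Example: length-5 sequence with a 3-token prefix.
# Position 0 can "look ahead" to position 2 under the prefix LM,
# but not under the causal LM; target positions stay causal in both.
print(causal_mask(5).astype(int))
print(prefix_lm_mask(5, prefix_len=3).astype(int))
```

In an encoder-decoder realization like RedLLM, the bidirectional prefix block corresponds to the encoder over the inputs and the causal block to the decoder over the targets; a decoder-only model can also be trained with this mask, which is why the objective transfers across architectures.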