Learning to Keep a Promise: Scaling Language Model Decoding Parallelism with Learned Asynchronous Decoding

📅 2025-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Autoregressive large language models (LLMs) decode slowly, and existing parallel decoding methods rely on rigid syntactic heuristics. Method: This paper proposes PASTA, a learning-based asynchronous parallel decoding framework. A two-stage fine-tuning procedure teaches the LLM to identify semantically independent segments of its own responses and to mark them in a learned decoding annotation language (PASTA-LANG); an interpreter then acts on these annotations at inference time to orchestrate dynamic, fine-grained parallel decoding. Contribution/Results: PASTA eliminates hand-crafted rules by modeling parallelism as an intrinsic, learned capability of the LLM, explicitly trading off response quality against decoding efficiency. On AlpacaEval, PASTA achieves geometric mean speedups of 1.21×–1.93× with length-controlled win-rate changes ranging from +2.2% to −7.1%, and its speed–quality Pareto frontier dominates prior parallel decoding approaches.

📝 Abstract
Decoding with autoregressive large language models (LLMs) traditionally occurs sequentially, generating one token after another. An emerging line of work has explored parallel decoding by identifying and simultaneously generating semantically independent chunks of LLM responses. However, these techniques rely on hand-crafted heuristics tied to syntactic structures like lists and paragraphs, making them rigid and imprecise. We present PASTA, a learning-based system that teaches LLMs to identify semantic independence and express parallel decoding opportunities in their own responses. At its core are PASTA-LANG and its interpreter: PASTA-LANG is an annotation language that enables LLMs to express semantic independence in their own responses; the language interpreter acts on these annotations to orchestrate parallel decoding on the fly at inference time. Through a two-stage finetuning process, we train LLMs to generate PASTA-LANG annotations that optimize both response quality and decoding speed. Evaluation on AlpacaEval, an instruction-following benchmark, shows that our approach Pareto-dominates existing methods in terms of decoding speed and response quality; our results demonstrate geometric mean speedups ranging from 1.21x to 1.93x with corresponding quality changes of +2.2% to -7.1%, measured by length-controlled win rates against a sequential decoding baseline.
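To make the annotate-then-parallelize idea concrete, here is a minimal sketch of an interpreter loop in the spirit the abstract describes: the model emits annotations marking semantically independent chunks, and the interpreter decodes those chunks concurrently before splicing the results back into the response. The `<async>` tag syntax and the `decode_segment` stand-in are illustrative assumptions, not the paper's actual PASTA-LANG syntax or decoding stack.

```python
import re
from concurrent.futures import ThreadPoolExecutor

def decode_segment(prompt: str) -> str:
    # Stand-in for autoregressively decoding one independent chunk;
    # a real system would run the LLM here.
    return f"[decoded: {prompt}]"

def parallel_decode(annotated: str) -> str:
    # Find spans the model marked as semantically independent.
    # (<async>...</async> is a hypothetical annotation, for illustration.)
    spans = re.findall(r"<async>(.*?)</async>", annotated)
    # Decode all independent spans concurrently.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(decode_segment, spans))
    # Splice each decoded chunk back in place of its annotation.
    out = annotated
    for span, result in zip(spans, results):
        out = out.replace(f"<async>{span}</async>", result, 1)
    return out

print(parallel_decode("Intro. <async>topic A</async> <async>topic B</async> Done."))
```

The real system is more involved (the annotations are generated token-by-token during decoding, and scheduling happens on the fly rather than after the full response exists), but the sketch shows the core contract: annotations delimit work that can proceed asynchronously, and an interpreter keeps the final response order intact.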
Problem

Research questions and friction points this paper is trying to address.

Improving LLM decoding parallelism
Identifying semantic independence
Optimizing response quality and speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learned asynchronous decoding system
PASTA-LANG annotation language
Two-stage finetuning optimization