Principled Coarse-Grained Acceptance for Speculative Decoding in Speech

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In speech generation, speculative decoding suffers from low acceptance rates and limited acceleration because verification requires exact matching of discrete acoustic tokens between the target and draft models. To address this, we propose Principled Coarse-Graining (PCG), an acoustic-aware coarse-grained verification mechanism: it constructs Acoustic Similarity Groups (ASGs) in the target model's embedding space and performs overlap-aware rejection sampling at the group level, ensuring group-level exactness guarantees while improving draft acceptance. PCG integrates acoustic clustering, probability mass splitting, and overlapping distribution modeling. Evaluated on LibriTTS, it significantly improves acceptance rate and throughput over standard speculative decoding and existing speech-specific methods, while preserving speech intelligibility and speaker similarity.

📝 Abstract
Speculative decoding accelerates autoregressive speech generation by letting a fast draft model propose tokens that a larger target model verifies. However, for speech LLMs that generate acoustic tokens, exact token matching is overly restrictive: many discrete tokens are acoustically or semantically interchangeable, reducing acceptance rates and limiting speedups. We introduce Principled Coarse-Graining (PCG), which verifies proposals at the level of Acoustic Similarity Groups (ASGs) derived from the target model's embedding space. By splitting each token's probability mass across the overlapping groups that contain it, we define an overlap-aware coarse-grained distribution and perform rejection sampling on the resulting group variable. This yields an exactness guarantee at the group level while allowing the accepted draft token to stand in for any member of the group in practice. On LibriTTS, PCG increases acceptance and throughput relative to standard speculative decoding and prior speech-specific relaxations while maintaining intelligibility and speaker similarity. These results suggest acoustically aware, group-level acceptance as a simple and general way to accelerate speech token generation while maintaining speech quality.
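The overlap-aware coarse-grained distribution described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the even splitting of a token's probability mass across the groups that contain it, and the uniform choice among a draft token's containing groups, are simplifying assumptions standing in for the paper's exact splitting and sampling rules.

```python
import numpy as np

def group_distribution(token_probs, groups):
    """Coarse-grain a token distribution onto Acoustic Similarity Groups.

    Each token's probability mass is split evenly across the (possibly
    overlapping) groups that contain it, then summed per group.
    """
    n_members = np.zeros(len(token_probs), dtype=int)
    for g in groups:
        for t in g:
            n_members[t] += 1
    g_probs = np.zeros(len(groups))
    for gi, g in enumerate(groups):
        for t in g:
            g_probs[gi] += token_probs[t] / n_members[t]
    return g_probs

def accept_draft(draft_token, target_probs, draft_probs, groups, rng):
    """Group-level rejection test for one draft token.

    P and Q are the coarse-grained target and draft distributions; the
    standard speculative-decoding acceptance test min(1, P/Q) is applied
    to the group variable rather than to the exact token.
    """
    P = group_distribution(target_probs, groups)
    Q = group_distribution(draft_probs, groups)
    # Pick a group for the draft token among the groups containing it
    # (uniform choice here, matching the even mass split above).
    containing = [gi for gi, g in enumerate(groups) if draft_token in g]
    g = rng.choice(containing)
    return rng.random() < min(1.0, P[g] / max(Q[g], 1e-12))
```

Because acceptance compares group masses rather than per-token probabilities, a draft token whose exact identity differs from the target's preferred token can still be accepted whenever both fall in the same acoustically similar group, which is the source of the higher acceptance rates reported on LibriTTS.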
Problem

Research questions and friction points this paper is trying to address.

Exact token matching between draft and target models is overly restrictive for acoustic tokens
Acoustically interchangeable tokens are rejected under strict verification, lowering acceptance rates and speedups
Relaxed acceptance criteria must not degrade intelligibility or speaker similarity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Acoustic Similarity Groups replace exact token matching
Overlap-aware coarse-grained distribution enables rejection sampling
Group-level acceptance maintains speech quality while accelerating generation
Moran Yanuka
Tel-Aviv University
Paul Dixon
Apple, Tel-Aviv University
Eyal Finkelshtein
Apple, Tel-Aviv University
Daniel Rotman
Apple, Tel-Aviv University
Raja Giryes
Professor, Tel Aviv University