🤖 AI Summary
In speech generation, speculative decoding suffers from low acceptance rates and limited acceleration because target and draft models must match discrete acoustic tokens exactly. To address this, we propose Principled Coarse-Graining (PCG), an acoustic-aware coarse-grained verification mechanism: it constructs Acoustic Similarity Groups (ASGs) in the target model's embedding space and performs overlap-aware rejection sampling at the group level, providing an exactness guarantee on the group variable while improving draft acceptance. PCG integrates acoustic clustering, probability-mass splitting, and overlapping-distribution modeling. Evaluated on LibriTTS, it significantly improves acceptance rate and throughput over standard speculative decoding and existing speech-specific methods, while preserving speech intelligibility and speaker similarity.
📝 Abstract
Speculative decoding accelerates autoregressive speech generation by letting a fast draft model propose tokens that a larger target model verifies. However, for speech LLMs that generate acoustic tokens, exact token matching is overly restrictive: many discrete tokens are acoustically or semantically interchangeable, reducing acceptance rates and limiting speedups. We introduce Principled Coarse-Graining (PCG), which verifies proposals at the level of Acoustic Similarity Groups (ASGs) derived from the target model's embedding space. By splitting each token's probability mass across the overlapping groups that contain it, we define an overlap-aware coarse-grained distribution and perform rejection sampling on the resulting group variable. This yields an exactness guarantee at the group level while allowing the accepted draft token to stand in for any member of its group in practice. On LibriTTS, PCG increases acceptance and throughput relative to standard speculative decoding and prior speech-specific relaxations while maintaining intelligibility and speaker similarity. These results suggest acoustically aware, group-level acceptance as a simple and general way to accelerate speech token generation without sacrificing speech quality.
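The core mechanism in the abstract can be sketched in a few lines: split each token's probability mass evenly across the (possibly overlapping) groups that contain it to obtain a coarse-grained distribution, then run standard rejection sampling on the group variable instead of the token. The sketch below is a minimal illustration under that reading, not the authors' implementation; the function names, the uniform mass split, and the way a group is chosen for the draft token are all assumptions.

```python
import numpy as np

def coarse_distribution(p, groups):
    """Coarse-grain a token distribution p over a vocabulary into a
    distribution over (possibly overlapping) groups. Each token's mass
    is split evenly among all groups containing it (an assumption; the
    paper's split rule may differ)."""
    membership = np.zeros(len(p))          # number of groups covering each token
    for g in groups:
        for t in g:
            membership[t] += 1
    q = np.zeros(len(groups))
    for gi, g in enumerate(groups):
        for t in g:
            q[gi] += p[t] / membership[t]  # split this token's mass
    return q

def group_level_accept(draft_token, p_draft, p_target, groups, rng):
    """Group-level rejection sampling: pick a group containing the draft
    token, then accept with the usual speculative-decoding ratio applied
    to the coarse-grained (group) distributions."""
    containing = [gi for gi, g in enumerate(groups) if draft_token in g]
    gi = containing[rng.integers(len(containing))]  # uniform split -> uniform pick
    q_g = coarse_distribution(p_draft, groups)
    p_g = coarse_distribution(p_target, groups)
    accept = rng.random() < min(1.0, p_g[gi] / max(q_g[gi], 1e-12))
    return accept, gi
```

Because every token's mass is fully redistributed, the coarse-grained distribution still sums to one, so the standard rejection-sampling correctness argument carries over to the group variable; on rejection one would resample from the residual group distribution, exactly as in vanilla speculative decoding.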