Frame-Stacked Local Transformers For Efficient Multi-Codebook Speech Generation

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the strong inter-codebook dependencies and the fidelity-efficiency trade-off in parallel multi-codebook speech generation, this paper proposes the Frame-Stacked Local Transformer (FS-LT). FS-LT jointly decodes multiple frames via frame stacking, employs local attention to model inter-codebook dependencies, and integrates autoregressive initialization with MaskGIT-style iterative refinement—enabling high-fidelity codebook prediction in a single forward pass. Compared to fully parallel methods, FS-LT significantly improves speech fidelity; relative to autoregressive baselines, it accelerates decoding by 2.3–4.1× while preserving subjective quality on par with state-of-the-art autoregressive models. Extensive experiments across multiple benchmarks demonstrate FS-LT’s superior Pareto-optimality on the quality–efficiency trade-off curve. Furthermore, the work provides practical, configurable decoding strategies for real-world deployment.
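The frame-stacking idea in the summary — grouping consecutive frames so the primary transformer emits one step per group while the local transformer decodes all of that group's codebooks — can be sketched as a simple tensor regrouping. The shapes and function name below are illustrative assumptions; the paper's exact tensor layout is not specified here.

```python
import numpy as np

def stack_frames(codes: np.ndarray, stack: int) -> np.ndarray:
    """Hypothetical sketch of frame stacking.

    codes: (T, N) array of codebook indices (T frames, N codebooks).
    Returns a (T // stack, stack * N) array: each row is one primary-
    transformer step whose stack * N codes the local transformer decodes.
    """
    T, N = codes.shape
    assert T % stack == 0, "sequence length must be divisible by the stack size"
    return codes.reshape(T // stack, stack * N)

codes = np.arange(8 * 4).reshape(8, 4)   # 8 frames, 4 codebooks
stacked = stack_frames(codes, stack=2)   # 4 steps, 8 codes per step
print(stacked.shape)                     # (4, 8)
```

With a stack of F frames, the primary transformer runs T/F steps instead of T, which is where the reported decoding speedup comes from.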

📝 Abstract
Speech generation models based on large language models (LLMs) typically operate on discrete acoustic codes, which differ fundamentally from text tokens due to their multi-codebook structure. At each timestep, models must predict N codebook entries jointly, introducing dependencies that challenge simple parallel prediction approaches. Parallel prediction assumes independence among codebooks, yielding efficient decoding but often at the cost of reduced fidelity. To address this, hierarchical strategies employ a local transformer (LT) to refine predictions and capture intra-timestep dependencies. In this work, we systematically investigate two LT architectures: an autoregressive transformer that generates codebooks sequentially, and a MaskGIT-based transformer that performs iterative masked prediction. Both designs further enable frame stacking, where the primary transformer predicts multiple frames jointly, and the LT decodes their codebooks, offering improvements in speed without compromising perceptual quality. Through extensive analysis, we characterize the tradeoffs between parallel and iterative sampling strategies across different throughput and quality regimes. Finally, we propose practical guidelines for selecting decoding strategies based on deployment priorities such as computational efficiency and synthesis fidelity.
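The MaskGIT-based local transformer the abstract describes fills all codebook slots iteratively: start fully masked, predict every slot, commit only the most confident predictions, and re-predict the rest on the next pass. A minimal sketch of that loop follows; the stand-in model, vocabulary size, greedy argmax sampling, and cosine unmasking schedule are assumptions from the generic MaskGIT recipe, not the paper's configuration.

```python
import numpy as np

def maskgit_decode(predict_logits, n_codes, steps=4):
    """Iterative masked prediction over the n_codes slots of one step.

    predict_logits: callable mapping the current (partially masked) token
    vector to per-slot logits of shape (n_codes, vocab).
    """
    tokens = np.full(n_codes, -1)             # -1 marks a masked slot
    for s in range(1, steps + 1):
        logits = predict_logits(tokens)        # (n_codes, vocab)
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        pred = probs.argmax(-1)                # greedy choice per slot
        conf = probs.max(-1)
        conf[tokens >= 0] = np.inf             # committed slots never change
        # cosine schedule: how many slots remain masked after this step
        keep_masked = int(np.cos(np.pi / 2 * s / steps) * n_codes)
        order = np.argsort(-conf)              # most confident first
        commit = order[: n_codes - keep_masked]
        tokens = tokens.copy()
        tokens[commit] = np.where(tokens[commit] >= 0,
                                  tokens[commit], pred[commit])
    return tokens

def toy_model(tokens):
    # deterministic stand-in for the local transformer
    return np.tile(np.linspace(0.0, 1.0, 16), (len(tokens), 1))

out = maskgit_decode(toy_model, n_codes=8)
print(out)   # every slot is filled after the final step
```

Because each pass predicts all remaining slots at once, the number of LT forward calls is the step count rather than the codebook count, which is the source of the parallel-versus-autoregressive trade-off the paper analyzes.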
Problem

Research questions and friction points this paper is trying to address.

Addressing dependencies in multi-codebook speech token prediction
Improving efficiency while maintaining perceptual speech quality
Optimizing tradeoffs between parallel and iterative decoding strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frame-stacked local transformers capture dependencies
Autoregressive and MaskGIT transformers refine predictions
Joint multi-frame prediction improves speed without sacrificing quality
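For contrast with the iterative variant, the autoregressive local transformer decodes the codebooks of one step sequentially, each conditioned on the codes already chosen. The sketch below uses a hypothetical stand-in for the LT's conditional logits; the function names and greedy decoding are illustrative assumptions.

```python
import numpy as np

def ar_decode_step(lt_logits, n_codes):
    """Sequentially decode n_codes codebooks, conditioning each on the
    codes already chosen within this step (greedy, for illustration)."""
    chosen = []
    for k in range(n_codes):
        logits = lt_logits(k, chosen)      # condition on earlier codebooks
        chosen.append(int(np.argmax(logits)))
    return chosen

def toy_lt(k, prev):
    # deterministic toy conditional: peak depends on earlier choices
    logits = np.zeros(16)
    logits[(k + sum(prev)) % 16] = 1.0
    return logits

print(ar_decode_step(toy_lt, n_codes=4))   # [0, 1, 3, 7]
```

This variant makes one LT call per codebook, so its cost grows with N, whereas the MaskGIT-style loop caps the call count at a fixed number of refinement steps.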