AI Summary
Existing speech tokenizers employ fixed frame rates (e.g., 40 Hz), ignoring the temporal non-uniformity of speech information density and leading to inefficient representations. To address this, we propose VARSTok, the first end-to-end variable-frame-rate speech tokenizer. It performs adaptive speech segmentation via temporal-aware density peak clustering and unifies content and duration in a single token index through implicit duration encoding, eliminating the need for auxiliary duration prediction modules. VARSTok allocates fine-grained tokens to information-dense regions while automatically sparsifying tokenization in stationary segments, improving modeling efficiency and generalization. Experiments demonstrate that VARSTok achieves more natural speech reconstruction than fixed-frame-rate baselines while using up to 23% fewer tokens; it also reduces word error rate in zero-shot text-to-speech and significantly improves subjective audio quality.
Abstract
Existing speech tokenizers typically assign a fixed number of tokens per second, regardless of the varying information density or temporal fluctuations in the speech signal. This uniform token allocation mismatches the intrinsic structure of speech, where information is distributed unevenly over time. To address this, we propose VARSTok, a VAriable-frame-Rate Speech Tokenizer that adapts token allocation based on local feature similarity. VARSTok introduces two key innovations: (1) a temporal-aware density peak clustering algorithm that adaptively segments speech into variable-length units, and (2) a novel implicit duration coding scheme that embeds both content and temporal span into a single token index, eliminating the need for auxiliary duration predictors. Extensive experiments show that VARSTok significantly outperforms strong fixed-rate baselines. Notably, it achieves superior reconstruction naturalness while using up to 23% fewer tokens than a 40 Hz fixed-frame-rate baseline. VARSTok further yields lower word error rates and improved naturalness in zero-shot text-to-speech synthesis. To the best of our knowledge, this is the first work to demonstrate that a fully dynamic, variable-frame-rate acoustic speech tokenizer can be seamlessly integrated into downstream speech language models. Speech samples are available at https://zhengrachel.github.io/VARSTok.