Say More with Less: Variable-Frame-Rate Speech Tokenization via Adaptive Clustering and Implicit Duration Coding

πŸ“… 2025-09-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing speech tokenizers use fixed frame rates (e.g., 40 Hz), ignoring the temporal non-uniformity of speech information density and producing inefficient representations. To address this, the authors propose VARSTok, presented as the first end-to-end variable-frame-rate speech tokenizer. It segments speech adaptively via temporal-aware density peak clustering and unifies content and temporal span in a single token index through implicit duration coding, eliminating the need for auxiliary duration prediction modules. VARSTok produces fine-grained tokens in information-dense regions while automatically sparsifying tokenization in stationary segments, improving modeling efficiency and generalization. Experiments show that VARSTok achieves superior reconstruction naturalness with up to 23% fewer tokens than fixed-frame-rate baselines, and also lowers word error rate and improves subjective audio quality in zero-shot text-to-speech.

πŸ“ Abstract
Existing speech tokenizers typically assign a fixed number of tokens per second, regardless of the varying information density or temporal fluctuations in the speech signal. This uniform token allocation mismatches the intrinsic structure of speech, where information is distributed unevenly over time. To address this, we propose VARSTok, a VAriable-frame-Rate Speech Tokenizer that adapts token allocation based on local feature similarity. VARSTok introduces two key innovations: (1) a temporal-aware density peak clustering algorithm that adaptively segments speech into variable-length units, and (2) a novel implicit duration coding scheme that embeds both content and temporal span into a single token index, eliminating the need for auxiliary duration predictors. Extensive experiments show that VARSTok significantly outperforms strong fixed-rate baselines. Notably, it achieves superior reconstruction naturalness while using up to 23% fewer tokens than a 40 Hz fixed-frame-rate baseline. VARSTok further yields lower word error rates and improved naturalness in zero-shot text-to-speech synthesis. To the best of our knowledge, this is the first work to demonstrate that a fully dynamic, variable-frame-rate acoustic speech tokenizer can be seamlessly integrated into downstream speech language models. Speech samples are available at https://zhengrachel.github.io/VARSTok.
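The paper's clustering algorithm is not reproduced here, but the idea behind temporal-aware density peak clustering can be sketched: each frame's local density is its summed cosine similarity to temporally nearby frames, frames that are both dense and temporally isolated become segment anchors, and every frame joins its temporally nearest anchor, yielding contiguous variable-length units. The function name `temporal_dpc`, the window size, the peak-scoring rule, and the fixed peak count are illustrative assumptions, not VARSTok's implementation (the paper selects segments adaptively).

```python
import numpy as np

def temporal_dpc(feats: np.ndarray, n_peaks: int, window: int = 3):
    """Illustrative temporal-aware density peak clustering (an assumption,
    not VARSTok's exact algorithm).

    feats: (T, D) frame-level features; n_peaks: number of segments.
    Returns (peaks, labels): anchor frame indices and a per-frame segment
    label. Labels are contiguous in time because each frame is assigned
    to its temporally nearest peak.
    """
    T = feats.shape[0]
    unit = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)

    # Local density: summed cosine similarity within a temporal window.
    rho = np.empty(T)
    for i in range(T):
        lo, hi = max(0, i - window), min(T, i + window + 1)
        rho[i] = (unit[lo:hi] @ unit[i]).sum()

    # Delta: temporal distance to the nearest strictly denser frame
    # (frames tied at the global maximum keep the full span T).
    delta = np.full(T, float(T))
    for i in range(T):
        denser = np.flatnonzero(rho > rho[i])
        if denser.size:
            delta[i] = np.abs(denser - i).min()

    # Peaks: frames that are both dense and temporally isolated.
    peaks = np.sort(np.argsort(-(rho * delta))[:n_peaks])

    # Assign each frame to its temporally nearest peak.
    labels = np.abs(np.arange(T)[:, None] - peaks[None, :]).argmin(axis=1)
    return peaks, labels
```

Under this sketch, fast-changing regions produce many peaks (short segments, fine-grained tokens) while stationary regions produce few (long segments, sparse tokens), matching the behavior the abstract describes.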
Problem

Research questions and friction points this paper is trying to address.

Adaptive token allocation for variable information density speech
Eliminating auxiliary duration predictors with implicit coding
Seamless integration into downstream speech language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive clustering for variable-length speech units
Implicit duration coding in single token
Dynamic frame-rate integration without auxiliary predictors
πŸ”Ž Similar Papers
No similar papers found.
Authors
Rui-Chen Zheng
University of Science and Technology of China, Hefei, Anhui, China
Wenrui Liu
Zhejiang University (time series, multi-modal, LLM)
Hui-Peng Du
University of Science and Technology of China, Hefei, Anhui, China
Qinglin Zhang
Independent Researcher
Chong Deng
Alibaba Group (machine learning, natural language processing)
Qian Chen
Independent Researcher
Wen Wang
Independent Researcher
Yang Ai
Associate Researcher, University of Science and Technology of China (speech synthesis, speech enhancement, speech coding, deep learning)
Zhen-Hua Ling
University of Science and Technology of China, Hefei, Anhui, China