🤖 AI Summary
This study addresses the challenge that existing speech tokenizers struggle to preserve semantic information aligned with textual meaning in multimodal systems, which limits downstream task performance. The work explicitly disentangles and quantifies the semantic versus phonetic content captured by speech tokenizers, employing word-level probing tasks, layerwise representation analysis, and cross-modal alignment metrics such as Centered Kernel Alignment (CKA) to systematically evaluate widely used models. The findings reveal that current tokenizers predominantly model phonetic information while inadequately capturing lexical-level semantic structure. This insight provides empirical evidence and practical guidance for designing next-generation speech representations that jointly account for both phonetic and semantic content.
📝 Abstract
Speech tokenizers are essential for connecting speech to large language models (LLMs) in multimodal systems. These tokenizers are expected to preserve both semantic and acoustic information for downstream understanding and generation. However, emerging evidence suggests that what is termed "semantic" in speech representations does not align with text-derived semantics, a mismatch that can degrade multimodal LLM performance. In this paper, we systematically analyze the information encoded by several widely used speech tokenizers, disentangling their semantic and phonetic content through word-level probing tasks, layerwise representation analysis, and cross-modal alignment metrics such as Centered Kernel Alignment (CKA). Our results show that current tokenizers primarily capture phonetic rather than lexical-semantic structure, and we derive practical implications for the design of next-generation speech tokenization methods.
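
As a concrete illustration of the cross-modal alignment metric named above, the sketch below computes linear CKA between two aligned representation matrices. It assumes word-level speech features and text embeddings have already been extracted and paired row-by-row; the variable names, pooling choices, and feature sources are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two representation matrices.

    X: (n_samples, d1) activations, e.g., word-pooled features from one tokenizer layer.
    Y: (n_samples, d2) activations from another source, e.g., text embeddings of the
       same words, aligned so that row i refers to the same word in both matrices.
    """
    # Center each feature dimension so the score is invariant to translation.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # ||Y^T X||_F^2 measures cross-covariance; normalize by the self-covariance norms.
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (norm_x * norm_y))

# Hypothetical usage: compare word-level speech representations against text embeddings.
# speech_feats = ...  # (n_words, d_speech), pooled per word from a tokenizer layer
# text_feats = ...    # (n_words, d_text), e.g., from a text model's embedding layer
# print(linear_cka(speech_feats, text_feats))
```

A score near 1 indicates that the two representation spaces share similar structure over the same word set, while a low score for speech-vs-text pairs is the kind of evidence consistent with the paper's finding that current tokenizers capture phonetic rather than lexical-semantic structure.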