🤖 AI Summary
Existing speech encoders rely on external text-based language models (LMs) to obtain semantic representations, leading to redundancy and representation mismatch. This paper proposes an end-to-end speech representation learning framework that removes the need for a post-hoc text LM. The authors fine-tune Whisper so that its internal language module directly outputs compact representations aligned with both semantic and psychological dimensions, namely emotion and personality. To achieve this, they combine SBERT-based semantic distillation with psychologically grounded lexical embeddings, using a contrastive teacher-student training objective in which LM-derived text representations serve as the teacher. Across self-supervised affective tasks and downstream psychological tasks, the method reduces average error by 73.4% and 83.8%, respectively, outperforming state-of-the-art speech encoders. To the authors' knowledge, this is the first work to embed intrinsic psychological perception within a speech encoder, enabling low-latency, high-fidelity speech understanding without a separate text LM.
📝 Abstract
Current speech encoding pipelines often rely on an additional text-based LM to obtain robust representations of human communication, even though SotA speech-to-text models often have an LM within. This work proposes an approach to improve the LM within an audio model such that the subsequent text LM is unnecessary. We introduce WhiSPA (Whisper with Semantic and Psychological Alignment), which leverages a novel audio training objective: contrastive loss with a language model embedding as a teacher. Using over 500k speech segments from mental health audio interviews, we evaluate the utility of aligning Whisper's latent space with semantic representations from a text autoencoder (SBERT) and lexically derived embeddings of basic psychological dimensions: emotion and personality. Over self-supervised affective tasks and downstream psychological tasks, WhiSPA surpasses current speech encoders, achieving an average error reduction of 73.4% and 83.8%, respectively. WhiSPA demonstrates that it is not always necessary to run a subsequent text LM on speech-to-text output in order to obtain a rich psychological representation of human communication.
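To make the training objective concrete, the core idea of a "contrastive loss with a language model embedding as a teacher" can be illustrated with an InfoNCE-style batch loss: each audio (student) embedding should be closest to the teacher text embedding of the same utterance, relative to the other utterances in the batch. This is a minimal NumPy sketch of one common form of such an objective, not the paper's exact loss; the function name, temperature value, and toy data are illustrative assumptions.

```python
import numpy as np

def contrastive_alignment_loss(student, teacher, temperature=0.1):
    """InfoNCE-style teacher-student loss (illustrative, not WhiSPA's exact loss).

    student: (N, d) audio-encoder embeddings for N utterances
    teacher: (N, d) text-LM (e.g. SBERT) embeddings for the same utterances
    Returns the mean cross-entropy of matching each student row to its
    paired teacher row against the rest of the batch.
    """
    # L2-normalise so dot products are cosine similarities
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    logits = s @ t.T / temperature                     # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The matching pair for row i is column i (the diagonal)
    return -np.mean(np.diag(log_probs))

# Toy check: correctly paired embeddings score lower than mismatched ones
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
aligned = contrastive_alignment_loss(emb, emb)
shuffled = contrastive_alignment_loss(emb, emb[::-1])
print(aligned < shuffled)  # expected: True
```

In the paper's setting, the teacher embeddings would come from SBERT (plus the lexically derived psychological embeddings), while the student embeddings come from Whisper's latent space; the gradient flows only into the audio model, pulling its representations toward the text-derived targets.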