WhiSPA: Semantically and Psychologically Aligned Whisper with Self-Supervised Contrastive and Student-Teacher Learning

📅 2025-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing speech encoders rely on external text-based language models (LMs) to obtain semantic representations, which introduces redundancy and representation mismatch. This paper proposes an end-to-end speech representation learning framework that removes the need for a post-hoc LM: Whisper's internal language module is trained to directly output compact, multimodal representations aligned with both semantic and psychological dimensions, namely emotion and personality. To achieve this, the authors combine SBERT-based semantic distillation with psychologically grounded lexical embeddings in a contrastive teacher-student self-supervised training objective built on LM-derived textual representations. On downstream emotion recognition and psychological assessment tasks, the method reduces average error by 73.4% and 83.8%, respectively, outperforming state-of-the-art speech encoders. To the authors' knowledge, this is the first work to embed intrinsic psychological perception capability directly in a speech encoder, pointing toward low-latency, high-fidelity speech understanding without a separate text pipeline.

📝 Abstract
Current speech encoding pipelines often rely on an additional text-based LM to get robust representations of human communication, even though SotA speech-to-text models often have an LM within. This work proposes an approach to improve the LM within an audio model such that the subsequent text-LM is unnecessary. We introduce WhiSPA (Whisper with Semantic and Psychological Alignment), which leverages a novel audio training objective: contrastive loss with a language model embedding as a teacher. Using over 500k speech segments from mental health audio interviews, we evaluate the utility of aligning Whisper's latent space with semantic representations from a text autoencoder (SBERT) and lexically derived embeddings of basic psychological dimensions: emotion and personality. Over self-supervised affective tasks and downstream psychological tasks, WhiSPA surpasses current speech encoders, achieving an average error reduction of 73.4% and 83.8%, respectively. WhiSPA demonstrates that it is not always necessary to run a subsequent text LM on speech-to-text output in order to get a rich psychological representation of human communication.
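The semantic-plus-psychological alignment described in the abstract can be pictured as a weighted multi-objective: one term pulls the audio representation toward the SBERT sentence embedding, another toward lexically derived emotion/personality scores. The sketch below is a toy under assumptions (separate `sem_out`/`psych_out` heads, MSE distance, and the `w_sem`/`w_psy` weights are all illustrative, not the paper's actual formulation):

```python
def mse(u, v):
    """Mean squared error between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) / len(u)

def alignment_loss(sem_out, psych_out, sbert_emb, psych_emb,
                   w_sem=1.0, w_psy=1.0):
    """Weighted sum of distances to the semantic teacher (an SBERT
    sentence embedding) and the psychological teacher (lexically
    derived emotion/personality scores). Illustrative only: projection
    heads and normalization are omitted for brevity."""
    return w_sem * mse(sem_out, sbert_emb) + w_psy * mse(psych_out, psych_emb)
```

Driving both terms from the same audio encoder is what lets a single forward pass yield a representation that is simultaneously semantic and psychological, with no follow-up text LM.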
Problem

Research questions and friction points this paper is trying to address.

Improve LM within audio models
Align Whisper with semantic representations
Enhance psychological representation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive loss with LM embedding
Semantic and psychological alignment
Self-supervised student-teacher learning
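The contrastive teacher-student idea above can be sketched as an InfoNCE-style objective: each Whisper-derived (student) audio embedding is pulled toward the text-LM (teacher) embedding of its own transcript and pushed away from the other texts in the batch. A minimal pure-Python illustration, with assumed names and an illustrative `temperature`; the paper's exact loss and hyperparameters may differ:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(audio_embs, text_embs, temperature=0.1):
    """InfoNCE-style loss: audio_embs[i] should be most similar to its
    own teacher embedding text_embs[i] and dissimilar to the other
    texts in the batch. Uses a max-shift for numerical stability."""
    losses = []
    for i, a in enumerate(audio_embs):
        logits = [cosine(a, t) / temperature for t in text_embs]
        m = max(logits)
        denom = sum(math.exp(l - m) for l in logits)
        losses.append(-(logits[i] - m) + math.log(denom))
    return sum(losses) / len(losses)
```

A correctly aligned batch (each audio vector matching its own text vector) yields a near-zero loss, while a mismatched batch is penalized, which is the gradient signal that drags Whisper's latent space toward the teacher's.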
Rajath Rao
Stony Brook University
Adithya V Ganesan
Stony Brook University
Natural Language Processing, Computational Social Science
O. Kjell
Stony Brook University
Jonah Luby
Stony Brook University
Akshay Raghavan
Stony Brook University
Scott Feltman
Stony Brook University
Whitney Ringwald
University of Minnesota
Ryan L. Boyd
Department of Psychology, University of Texas at Dallas
computational social science, text analysis, social/personality psychology, behavior, emotion
Benjamin Luft
Stony Brook University
C. Ruggero
University of Texas at Dallas
Neville Ryant
University of Pennsylvania
Automatic Speech Recognition, Deep Neural Networks, Machine Learning, Linguistics
Roman Kotov
Professor of Psychiatry, Stony Brook University
Psychiatric Classification, Personality, Longitudinal Studies of Mental Health
H. A. Schwartz
Stony Brook University