ParaSpeechCLAP: A Dual-Encoder Speech-Text Model for Rich Stylistic Language-Audio Pretraining

📅 2026-03-30
🤖 AI Summary
This work addresses the limitations of existing speech style modeling approaches, which handle only a narrow set of style dimensions and therefore struggle with diverse textual style descriptions. To overcome this, the authors propose a dual-encoder contrastive learning framework that maps both speech and textual style descriptions into a unified embedding space, enabling fine-grained modeling at both the speaker and utterance levels. The framework incorporates dedicated and joint encoder architectures, classification-based auxiliary losses, category-balanced sampling, and multi-task pretraining to substantially enhance modeling capacity for both individual and composite style attributes. Experimental results demonstrate consistent superiority over baseline methods across three tasks: style retrieval, attribute classification, and serving as a reward model for text-to-speech synthesis. Notably, the learned representations improve controllable speech synthesis without requiring task-specific fine-tuning.
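The core training objective of a dual-encoder model like this is typically a symmetric contrastive (InfoNCE) loss over paired speech and caption embeddings, as in CLIP/CLAP. The paper does not publish its exact loss code here, so the following is a minimal NumPy sketch under that standard formulation; the function name and temperature value are illustrative, not the authors' implementation.

```python
import numpy as np

def clap_contrastive_loss(speech_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    speech_emb, text_emb: (batch, dim) arrays; row i of each is a positive pair.
    Returns the mean of speech->text and text->speech cross-entropy losses.
    """
    # L2-normalize rows so the dot product is cosine similarity
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # (batch, batch) similarity matrix, scaled by temperature
    logits = s @ t.T / temperature
    n = logits.shape[0]

    def xent_diag(mat):
        # numerically stable cross-entropy with targets on the diagonal
        mat = mat - mat.max(axis=1, keepdims=True)
        logp = mat - np.log(np.exp(mat).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # symmetric: retrieve captions from speech, and speech from captions
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

With perfectly aligned, mutually distinct pairs the loss approaches zero, and it grows as matching rows are shuffled apart, which is what drives the two encoders into a shared style space.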
📝 Abstract
We introduce ParaSpeechCLAP, a dual-encoder contrastive model that maps speech and text style captions into a common embedding space, supporting a wide range of intrinsic (speaker-level) and situational (utterance-level) descriptors (such as pitch, texture and emotion) far beyond the narrow set handled by existing models. We train specialized ParaSpeechCLAP-Intrinsic and ParaSpeechCLAP-Situational models alongside a unified ParaSpeechCLAP-Combined model, finding that specialization yields stronger performance on individual style dimensions while the unified model excels on compositional evaluation. We further show that ParaSpeechCLAP-Intrinsic benefits from an additional classification loss and class-balanced training. We demonstrate our models' performance on style caption retrieval, speech attribute classification and as an inference-time reward model that improves style-prompted TTS without additional training. ParaSpeechCLAP outperforms baselines on most metrics across all three applications. Our models and code are released at https://github.com/ajd12342/paraspeechclap.
Problem

Research questions and friction points this paper is trying to address.

speech-text pretraining
stylistic attributes
dual-encoder model
style representation
audio-language alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

dual-encoder contrastive learning
stylistic speech-text alignment
rich audio attribute modeling
style-prompted TTS reward
class-balanced training
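The class-balanced training listed above usually means sampling uniformly over style categories rather than over utterances, so rare attributes are not swamped by frequent ones. A minimal sketch of such a sampler follows; the function name and signature are hypothetical and not taken from the released code.

```python
import random
from collections import defaultdict

def class_balanced_batches(examples, labels, batch_size, num_batches, seed=0):
    """Yield batches whose draws are uniform over label categories.

    Each slot in a batch first picks a category uniformly at random,
    then picks an example uniformly within that category, so a label
    with 10 examples is sampled as often as one with 10,000.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for ex, lab in zip(examples, labels):
        by_label[lab].append(ex)
    categories = sorted(by_label)
    for _ in range(num_batches):
        batch = []
        for _ in range(batch_size):
            lab = rng.choice(categories)             # uniform over categories
            batch.append(rng.choice(by_label[lab]))  # uniform within category
        yield batch
```

Over many batches this equalizes the expected frequency of each style category regardless of how skewed the raw data distribution is.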