Speaker Disentanglement of Speech Pre-trained Model Based on Interpretability

📅 2025-07-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of disentangling speaker identity (timbre) from linguistic content in speech pre-trained models, where timbre removal often degrades semantic integrity. We propose InterpTF-SptME, a method for timbre-agnostic representation learning, and InterpTRQE-SptME, a novel evaluation benchmark. For the first time, we employ Gradient SHAP Explainer to quantitatively measure residual timbre in intermediate representations and introduce model-agnostic timbre filtering via SHAP Noise and SHAP Cropping. By fusing content- and timbre-specific embeddings to reconstruct representations, our approach achieves near-complete timbre removal—reducing residual timbre from 18.05% to ≈0% on the VCTK dataset using HuBERT LARGE—while substantially improving downstream performance on ASR and text classification tasks. The method preserves linguistic fidelity and simultaneously enhances speaker privacy protection.

📝 Abstract
Speech pre-trained models encode task-specific information at different layers, but decoupling content from timbre remains challenging: removing speaker-specific information often causes content loss. Moreover, current research lacks a direct metric for the timbre residual in model encodings, relying instead on indirect evaluation through downstream tasks. This paper addresses both challenges through interpretability-based speaker disentanglement in speech pre-trained models: we quantitatively evaluate the timbre residual in model embeddings and use the resulting interpretations to improve disentanglement. Our contributions are: (1) the InterpTRQE-SptME Benchmark, a timbre-residual recognition framework based on interpretability. The benchmark concatenates content embeddings with timbre embeddings for speaker classification, then applies the Gradient SHAP Explainer to quantify the timbre residual; we evaluate seven speech pre-trained model variants. (2) The InterpTF-SptME method, an interpretability-based timbre-filtering approach using SHAP Noise and SHAP Cropping. This model-agnostic method transforms intermediate encodings to remove timbre while preserving content. Experiments on the VCTK dataset with HuBERT LARGE demonstrate content preservation alongside substantially improved speaker disentanglement: SHAP Noise reduces the timbre residual from 18.05% to near 0% while maintaining content integrity, improving performance on content-related speech processing tasks and preventing timbre privacy leakage.
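The benchmark's residual metric can be sketched as follows. This is a hypothetical simplification, not the paper's implementation: the paper uses the Gradient SHAP Explainer over a trained speaker classifier, while here attributions are reduced to gradient-times-input for a linear classifier `W`. The idea is the same: concatenate content and timbre embeddings, attribute the speaker prediction to each input dimension, and report the share of attribution mass that falls on the content half (if the content embedding still helps predict the speaker, timbre has leaked in).

```python
import numpy as np

def timbre_residual(content_emb, timbre_emb, W, speaker_id):
    """Fraction of speaker-classifier attribution mass on the content half.

    Hypothetical sketch: for a linear speaker classifier with weights W,
    the gradient-times-input attribution of feature i toward class
    `speaker_id` is W[speaker_id, i] * x[i]. The paper instead uses the
    Gradient SHAP Explainer; the reduction step is analogous.
    """
    x = np.concatenate([content_emb, timbre_emb])
    attr = np.abs(W[speaker_id] * x)          # per-feature attribution mass
    d = len(content_emb)
    # Share of total attribution assigned to content dimensions:
    # 0.0 means no detectable timbre in the content embedding.
    return attr[:d].sum() / attr.sum()
```

A residual near 0 means the classifier ignores the content embedding entirely; the 18.05% figure reported for raw HuBERT LARGE encodings would correspond to 18.05% of attribution mass landing on the content half.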
Problem

Research questions and friction points this paper is trying to address.

Decouple content and timbre in speech pretrained models
Quantify timbre residual in model embeddings directly
Improve speaker disentanglement using interpretability techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interpretability-based speaker disentanglement in speech models
SHAP Explainer quantifies timbre residual in embeddings
SHAP Noise method reduces timbre residual near zero
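The two filtering operations named above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name `shap_filter`, the normalization, and the threshold are hypothetical, and the per-dimension `speaker_attr` vector stands in for the Gradient SHAP attributions the paper computes. The contrast between the variants is the point: SHAP Noise perturbs speaker-attributed dimensions proportionally, while SHAP Cropping zeroes the most strongly attributed ones outright.

```python
import numpy as np

def shap_filter(h, speaker_attr, mode="noise", sigma=0.1, thresh=0.5):
    """Filter timbre from an intermediate encoding h (hypothetical sketch).

    speaker_attr: per-dimension attribution of h toward speaker identity
    (in the paper, obtained from the Gradient SHAP Explainer).
    - "noise": add Gaussian noise scaled by each dimension's attribution,
      so content-dominated dimensions are left almost untouched.
    - "crop":  zero out dimensions whose normalized attribution exceeds
      thresh, discarding the most speaker-specific channels.
    """
    a = np.abs(speaker_attr)
    a = a / (a.max() + 1e-12)                 # normalize to [0, 1]
    if mode == "noise":
        rng = np.random.default_rng(0)
        return h + sigma * a * rng.standard_normal(h.shape)
    # cropping: suppress strongly speaker-attributed dimensions
    return np.where(a > thresh, 0.0, h)
```

Both variants are model-agnostic in the sense the paper describes: they operate on the encoding itself, so the filtered representation can be fed to any downstream content task without retraining the pre-trained model.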