WavShape: Information-Theoretic Speech Representation Learning for Fair and Privacy-Aware Audio Processing

📅 2025-06-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Speech embeddings often inadvertently encode sensitive attributes—such as speaker identity and accent—posing risks to fairness and privacy. To address this, we propose an information-theoretic framework for fair speech representation learning: leveraging the Donsker–Varadhan lower bound on mutual information, we design a differentiable sensitive-attribute filtering mechanism that explicitly minimizes the mutual information between embeddings and sensitive variables during encoder training, while simultaneously maximizing mutual information with task-relevant labels. Crucially, our method operates without requiring labeled sensitive attributes and is fully compatible with self-supervised speech modeling. Experiments across three standard benchmarks demonstrate that our approach reduces mutual information with sensitive attributes by up to 81%, while preserving 97% of task-relevant information. This yields substantial improvements in both fairness—measured via reduced demographic bias—and privacy—measured via decreased attribute leakage—without compromising downstream task performance.
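The core tool described above is the Donsker–Varadhan (DV) lower bound on mutual information. As a minimal sketch (not the paper's implementation, which trains a neural critic on speech embeddings), the snippet below evaluates the DV bound for two correlated Gaussians, where the optimal critic — the log density ratio — is known in closed form, so the bound can be checked against the analytic MI:

```python
import numpy as np

# Toy check of the Donsker-Varadhan (DV) bound:
#   I(X;Z) >= E_joint[T] - log E_marginal[exp(T)]   for any critic T.
# For jointly Gaussian (X, Z) with correlation rho, the optimal critic
# log p(x,z)/(p(x)p(z)) is closed-form and the true MI is -0.5*log(1-rho^2).

rng = np.random.default_rng(0)
rho, n = 0.8, 100_000

x = rng.standard_normal(n)
z = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # joint samples
z_shuffled = rng.permutation(z)  # shuffling breaks dependence -> marginal samples

def optimal_critic(x, z, rho=rho):
    """Log density ratio log p(x,z) / (p(x)p(z)) for correlated Gaussians."""
    return (-0.5 * np.log(1 - rho**2)
            - (x**2 - 2 * rho * x * z + z**2) / (2 * (1 - rho**2))
            + 0.5 * (x**2 + z**2))

def dv_lower_bound(t_joint, t_marginal):
    """DV estimate: mean critic score on joint pairs minus log-mean-exp on marginals."""
    return t_joint.mean() - np.log(np.exp(t_marginal).mean())

mi_est = dv_lower_bound(optimal_critic(x, z), optimal_critic(x, z_shuffled))
mi_true = -0.5 * np.log(1 - rho**2)  # analytic MI for this Gaussian pair
print(f"DV estimate: {mi_est:.3f}  true MI: {mi_true:.3f}")
```

In the paper's setting the critic is a trained network (as in MINE-style estimators) and the bound is differentiated through to shape the encoder; the closed-form critic here only serves to make the bound verifiable.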

📝 Abstract
Speech embeddings often retain sensitive attributes such as speaker identity, accent, or demographic information, posing risks in biased model training and privacy leakage. We propose WavShape, an information-theoretic speech representation learning framework that optimizes embeddings for fairness and privacy while preserving task-relevant information. We leverage mutual information (MI) estimation using the Donsker-Varadhan formulation to guide an MI-based encoder that systematically filters sensitive attributes while maintaining speech content essential for downstream tasks. Experimental results on three known datasets show that WavShape reduces MI between embeddings and sensitive attributes by up to 81% while retaining 97% of task-relevant information. By integrating information theory with self-supervised speech models, this work advances the development of fair, privacy-aware, and resource-efficient speech systems.
Problem

Research questions and friction points this paper is trying to address.

Speech embeddings retain sensitive attributes, risking biased model training and privacy leakage
Proposes WavShape to optimize embeddings for fairness and privacy
Reduces mutual information with sensitive attributes while preserving task-relevant information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Information-theoretic speech representation learning framework
Mutual-information-based encoder that filters out sensitive attributes
Integrates with self-supervised speech models for fair, privacy-aware systems
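The filtering idea in these bullets can be written schematically. The notation below is assumed for illustration and not taken verbatim from the paper: Z_θ is the learned embedding, Y the task label, S the sensitive attribute, λ a trade-off weight, and T the DV critic:

```latex
% Encoder objective: keep task information, discard sensitive information
\min_{\theta} \; -\, I(Z_\theta; Y) \;+\; \lambda \, I(Z_\theta; S)

% Each MI term is estimated via the Donsker--Varadhan representation
I(U; V) \;\ge\; \sup_{T} \;
  \mathbb{E}_{p(u,v)}\!\left[ T(u,v) \right]
  \;-\; \log \mathbb{E}_{p(u)\,p(v)}\!\left[ e^{T(u,v)} \right]
```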
Oguzhan Baser
Department of Electrical and Computer Engineering, The University of Texas at Austin, USA
Ahmet Ege Tanriverdi
Undergraduate Student, Bogazici University
Representation Learning · Deep Learning · Optimization Theory · Statistical Inference
Kaan Kale
Undergraduate Student, Bogazici University
Sandeep P. Chinchali
Department of Electrical and Computer Engineering, The University of Texas at Austin, USA
Sriram Vishwanath
MITRE
Information & Coding Theory · Communications/Networking · Blockchains/Crypto · AI/ML/Data Science