🤖 AI Summary
Speech embeddings often inadvertently encode sensitive attributes—such as speaker identity and accent—posing risks to fairness and privacy. To address this, we propose an information-theoretic framework for fair speech representation learning: leveraging the Donsker–Varadhan lower bound on mutual information, we design a differentiable sensitive-attribute filtering mechanism that explicitly minimizes the mutual information between embeddings and sensitive variables during encoder training, while simultaneously maximizing mutual information with task-relevant labels. Crucially, our method operates without requiring labeled sensitive attributes and is fully compatible with self-supervised speech modeling. Experiments across three standard benchmarks demonstrate that our approach reduces mutual information with sensitive attributes by up to 81%, while preserving 97% of task-relevant information. This yields substantial improvements in both fairness—measured via reduced demographic bias—and privacy—measured via decreased attribute leakage—without compromising downstream task performance.
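The Donsker–Varadhan bound referenced above states that \(I(X;Z) \ge \mathbb{E}_{P_{XZ}}[T] - \log \mathbb{E}_{P_X \otimes P_Z}[e^{T}]\) for any critic function \(T\). A minimal NumPy sketch of this estimator on correlated Gaussians, where the optimal critic is known in closed form (the paper instead learns the critic as a neural network; the specific critic here is purely illustrative):

```python
import numpy as np

def dv_lower_bound(t_joint, t_marginal):
    """Donsker-Varadhan lower bound on mutual information:
    I(X;Z) >= E_P[T] - log E_{P_X x P_Z}[exp(T)],
    given critic scores on joint samples and on shuffled (product) samples."""
    return t_joint.mean() - np.log(np.exp(t_marginal).mean())

rng = np.random.default_rng(0)
n = 100_000
rho = 0.8  # correlation between X and Z
x = rng.standard_normal(n)
z = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def critic(x, z, rho=rho):
    # Optimal critic for jointly Gaussian (X, Z): the log density ratio
    # log p(x,z) - log p(x)p(z), for which the DV bound is tight.
    return (-0.5 * np.log(1 - rho**2)
            - (x**2 - 2 * rho * x * z + z**2) / (2 * (1 - rho**2))
            + (x**2 + z**2) / 2)

z_shuf = rng.permutation(z)  # draws from the product of marginals
mi_hat = dv_lower_bound(critic(x, z), critic(x, z_shuf))
mi_true = -0.5 * np.log(1 - rho**2)  # closed-form Gaussian MI (in nats)
```

With this critic, `mi_hat` closely matches the true mutual information of roughly 0.51 nats; in WavShape the same estimate is instead driven by a trained critic and differentiated through to shape the encoder.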
📝 Abstract
Speech embeddings often retain sensitive attributes such as speaker identity, accent, or demographic information, posing risks of biased model training and privacy leakage. We propose WavShape, an information-theoretic speech representation learning framework that optimizes embeddings for fairness and privacy while preserving task-relevant information. We leverage mutual information (MI) estimation using the Donsker–Varadhan formulation to guide an MI-based encoder that systematically filters sensitive attributes while maintaining speech content essential for downstream tasks. Experimental results on three widely used datasets show that WavShape reduces MI between embeddings and sensitive attributes by up to 81% while retaining 97% of task-relevant information. By integrating information theory with self-supervised speech models, this work advances the development of fair, privacy-aware, and resource-efficient speech systems.
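The two-sided optimization the abstract describes, maximizing MI with the task label while minimizing MI with the sensitive attribute, admits a natural formalization; the trade-off weight \(\lambda\) below is an assumption for illustration, not a value taken from the paper:

```latex
\min_{\theta} \; -\,I\big(f_\theta(X);\,Y\big) \;+\; \lambda\, I\big(f_\theta(X);\,S\big)
```

Here \(f_\theta\) is the encoder, \(Y\) the downstream label, and \(S\) the sensitive attribute; each MI term is estimated via the Donsker–Varadhan lower bound, keeping the whole objective differentiable in \(\theta\).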