🤖 AI Summary
This paper addresses the problem of irreversibly erasing sensitive demographic attributes (e.g., gender, race) from neural representations while preserving semantic meaning, so as to improve fairness in downstream NLP tasks. To this end, the authors propose LEOPARD: a density-matching-based orthogonal projection learning framework that achieves efficient, structure-preserving erasure of discrete concepts in nonlinear embedding spaces. LEOPARD explicitly aligns class-conditional feature distributions and controls the projection rank to ensure both geometric fidelity and rigorous erasure. Its key contribution lies in formulating concept erasure as a distribution alignment problem with local geometric constraints, thereby balancing strict attribute removal against high semantic retention. Evaluated on multiple NLP benchmarks, LEOPARD significantly outperforms state-of-the-art methods, achieving superior bias mitigation under deep nonlinear classifiers while maintaining competitive task performance.
📄 Abstract
Ensuring that neural models used in real-world applications cannot infer sensitive information, such as demographic attributes like gender or race, from text representations is a critical challenge when fairness is a concern. We address this issue through concept erasure, a process that removes information related to a specific concept from distributed representations while preserving as much of the remaining semantic information as possible. Our approach involves learning an orthogonal projection in the embedding space, designed to make the class-conditional feature distributions of the discrete concept to erase indistinguishable after projection. By adjusting the rank of the projector, we control the extent of information removal, while its orthogonality ensures strict preservation of the local structure of the embeddings. Our method, termed $\overline{\mathrm{L}}$EOPARD, achieves state-of-the-art performance in nonlinear erasure of a discrete attribute on classic natural language processing benchmarks. Furthermore, we demonstrate that $\overline{\mathrm{L}}$EOPARD effectively mitigates bias in deep nonlinear classifiers, thereby promoting fairness.
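As a rough illustration of the projection mechanics described above (not the authors' actual training procedure), the sketch below builds a rank-controlled orthogonal projector in NumPy: an orthonormal basis `U` spans the subspace to erase, and `P = I - U Uᵀ` removes exactly those directions while acting as an isometry on the orthogonal complement. In the paper, `U` would be learned by making class-conditional distributions indistinguishable after projection; here it is random, purely for demonstration, and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 2  # embedding dimension and erasure rank (illustrative values)

# Orthonormal basis U (d x r) for the subspace to erase. In the framework
# described above this basis is *learned* via density matching; here it is
# drawn at random to keep the sketch self-contained.
U, _ = np.linalg.qr(rng.normal(size=(d, r)))

# Orthogonal projector of rank d - r: removes components along U and
# leaves the orthogonal complement untouched.
P = np.eye(d) - U @ U.T

X = rng.normal(size=(5, d))   # a batch of embeddings
X_erased = X @ P.T            # embeddings after erasure

# Components along the erased subspace vanish after projection.
print(np.allclose(X_erased @ U, 0.0))
```

Lowering `r` removes less information (higher semantic retention); raising it erases more aggressively, which is the rank-based trade-off the abstract refers to.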