Nonlinear Concept Erasure: a Density Matching Approach

📅 2025-07-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses the problem of irreversibly erasing sensitive demographic attributes (e.g., gender, race) from neural representations while preserving semantic content, in order to improve fairness in downstream NLP tasks. The authors propose LEOPARD, an orthogonal-projection learning framework based on density matching that achieves efficient, structure-preserving erasure of discrete concepts in nonlinear embedding spaces. LEOPARD explicitly aligns the class-conditional feature distributions of the concept to erase and controls the rank of the projector, ensuring both geometric fidelity and erasure rigor. Its key contribution lies in formulating concept erasure as a distribution-alignment problem with local geometric constraints, thereby balancing strict attribute removal against high semantic retention. Evaluated on multiple NLP benchmarks, LEOPARD significantly outperforms state-of-the-art methods, achieving superior bias mitigation under deep nonlinear classifiers while maintaining competitive task performance.

๐Ÿ“ Abstract
Ensuring that neural models used in real-world applications cannot infer sensitive information, such as demographic attributes like gender or race, from text representations is a critical challenge when fairness is a concern. We address this issue through concept erasure, a process that removes information related to a specific concept from distributed representations while preserving as much of the remaining semantic information as possible. Our approach involves learning an orthogonal projection in the embedding space, designed to make the class-conditional feature distributions of the discrete concept to erase indistinguishable after projection. By adjusting the rank of the projector, we control the extent of information removal, while its orthogonality ensures strict preservation of the local structure of the embeddings. Our method, termed $\overline{\mathrm{L}}$EOPARD, achieves state-of-the-art performance in nonlinear erasure of a discrete attribute on classic natural language processing benchmarks. Furthermore, we demonstrate that $\overline{\mathrm{L}}$EOPARD effectively mitigates bias in deep nonlinear classifiers, thereby promoting fairness.
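The core idea described in the abstract — learning a rank-constrained orthogonal projector that makes the class-conditional distributions of the protected attribute indistinguishable — can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation: it uses an RBF-kernel MMD as a stand-in density-matching objective (the paper's exact objective may differ), a binary attribute, and toy Gaussian data; the function names `rbf_mmd2` and `learn_erasure_projector` are hypothetical.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    # Biased estimate of squared MMD between samples x and y, RBF kernel.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def learn_erasure_projector(X, y, rank, steps=200, lr=0.05):
    """Learn P = I - U U^T (U orthonormal, d x rank) so that the two
    class-conditional distributions of X P^T become hard to distinguish.
    MMD is used here as an assumed density-matching criterion."""
    d = X.shape[1]
    A = torch.randn(d, rank, requires_grad=True)
    opt = torch.optim.Adam([A], lr=lr)
    for _ in range(steps):
        U, _ = torch.linalg.qr(A)        # orthonormal basis of erased subspace
        P = torch.eye(d) - U @ U.T       # orthogonal projector, rank d - rank
        Z0, Z1 = X[y == 0] @ P.T, X[y == 1] @ P.T
        loss = rbf_mmd2(Z0, Z1)          # match projected class distributions
        opt.zero_grad(); loss.backward(); opt.step()
    U, _ = torch.linalg.qr(A.detach())
    return torch.eye(d) - U @ U.T

# Toy data: a binary attribute leaks through coordinate 0.
torch.manual_seed(0)
X = torch.randn(200, 8)
y = (torch.rand(200) > 0.5).long()
X[:, 0] += 3.0 * y.float()
P = learn_erasure_projector(X, y, rank=1)
Z = X @ P.T                              # erased representations
```

Because P is an orthogonal projection, it is symmetric and idempotent, so distances within the preserved subspace are left untouched — this is the "strict preservation of local structure" the abstract refers to, and the corank parameter `rank` directly controls how much information is removed.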
Problem

Research questions and friction points this paper is trying to address.

Remove sensitive info from neural text representations
Preserve semantic info while erasing specific concepts
Mitigate bias in deep nonlinear classifiers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonal projection for concept erasure
Density matching to erase discrete attributes
Adjustable rank controls information removal
Antoine Saillenfest
onepoint, 29 rue des Sablons, 75116 Paris (France)
Pirmin Lemberger
onepoint
NLP · machine learning · artificial intelligence