🤖 AI Summary
This work addresses the limitations of existing concept erasure methods, which struggle to defend against nonlinear attacks and lack a rigorous quantification of the trade-off between utility preservation and privacy. The authors propose Obliviator, a post-processing concept erasure framework grounded in a functional perspective. By iteratively optimizing kernel compositions to reshape the feature space, Obliviator explicitly models and eliminates nonlinear statistical dependencies between sensitive attributes and representations. Notably, it formulates concept erasure as an optimization problem in function space, a first in the field, thereby achieving strong protection against nonlinear adversaries while maintaining downstream task utility. The method also exposes the dynamic trade-off between the cost of nonlinear protection and utility retention. Empirical results show that Obliviator significantly outperforms baseline approaches on the utility-erasure trade-off curve and generalizes well: its erasure is more utility-preserving on the better-disentangled representations learned by more capable models.
📝 Abstract
Concept erasure aims to remove unwanted attributes, such as social or demographic factors, from learned representations, while preserving their task-relevant utility. While the goal of concept erasure is protection against all adversaries, existing methods remain vulnerable to nonlinear ones. This vulnerability arises from their failure to fully capture the complex, nonlinear statistical dependencies between learned representations and unwanted attributes. Moreover, although the existence of a trade-off between utility and erasure is expected, its progression during the erasure process, i.e., the cost of erasure, remains unstudied. In this work, we introduce Obliviator, a post-hoc erasure method designed to fully capture nonlinear statistical dependencies. We formulate erasure from a functional perspective, leading to an optimization problem involving a composition of kernels that lacks a closed-form solution. Instead of solving this problem in a single shot, we adopt an iterative approach that gradually morphs the feature space to achieve a more utility-preserving erasure. Unlike prior methods, Obliviator guards unwanted attributes against nonlinear adversaries. Our gradual approach quantifies the cost of nonlinear guardedness and reveals the dynamics between attribute protection and utility preservation over the course of erasure. The utility-erasure trade-off curves obtained by Obliviator outperform those of the baselines and demonstrate its strong generalizability: its erasure becomes more utility-preserving when applied to the better-disentangled representations learned by more capable models.
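The abstract's core motivation is that linear erasure leaves nonlinear statistical dependencies intact, which is what a kernel-based functional view is meant to fix. The following is a minimal, illustrative sketch of that failure mode only, not the paper's algorithm or objective: it uses HSIC (a standard kernel dependence measure, here as a stand-in for the paper's kernel machinery) to show that a binary attribute leaking through feature *variance* survives a linear concept-erasure projection. All variable names and the synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def rbf_kernel(X, gamma=None):
    """RBF kernel matrix with the median heuristic for the bandwidth."""
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    if gamma is None:
        pos = d2[d2 > 0]
        gamma = 1.0 / np.median(pos) if pos.size else 1.0
    return np.exp(-gamma * d2)


def hsic(X, Z):
    """Biased HSIC estimator: near zero when X and Z are independent."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_kernel(X), rbf_kernel(Z)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2


# A binary attribute z leaks into the features twice: linearly through
# the first coordinate, and nonlinearly (through the variance) in the
# second coordinate.
n = 400
z = rng.choice([-1.0, 1.0], size=n)
x1 = 0.9 * z + rng.normal(size=n)
x2 = rng.normal(size=n) * (1.0 + 2.0 * (z > 0))  # std 1 vs. 3 by class
X = np.stack([x1, x2], axis=1)

# Linear erasure: project out the class-mean difference direction.
w = X[z > 0].mean(axis=0) - X[z < 0].mean(axis=0)
w /= np.linalg.norm(w)
X_erased = X - np.outer(X @ w, w)

# A linear adversary now sees nothing along w ...
print("max |<x, w>| after erasure:", np.abs(X_erased @ w).max())
# ... but kernel dependence remains: HSIC with the true z stays above
# the independence baseline obtained by permuting z.
Z = z[:, None]
print("HSIC(erased X, z):         ", hsic(X_erased, Z))
print("HSIC(erased X, permuted z):", hsic(X_erased, rng.permutation(Z)))
```

In this toy setting, a single linear projection zeroes the mean-difference leak but cannot touch the variance-based dependence; the paper's iterative, kernel-composition approach is aimed precisely at such residual nonlinear dependence while morphing the feature space as little as possible.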