AI Summary
How can we preserve user privacy without compromising predictive performance? This paper proposes a privacy-aware representation learning framework based on adversarial training. Its core innovation is a focal entropy regularization term, which replaces conventional entropy regularization to more precisely suppress leakage of sensitive attributes. Integrated into the adversarial architecture, this mechanism lets the encoder actively mask sensitive information, substantially improving the privacy-utility trade-off. Experiments on multiple benchmark datasets show that, under identical privacy budgets, the proposed method improves downstream task accuracy by an average of 2.3% over state-of-the-art approaches, while reducing the success rate of sensitive-attribute inference attacks to below 12.7%. These results support both the effectiveness and the practicality of the framework.
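The summary above does not spell out the training objective, but adversarial sanitization of this kind is commonly written as a min-max game between an encoder $E$, a task predictor $T$, and a sensitive-attribute adversary $A$. The following is a plausible sketch, not the paper's exact formulation; in particular, the precise form of the focal entropy term $\mathcal{H}_{\text{focal}}$ is an assumption:

$$
\min_{\theta_A}\; \mathcal{L}_{\text{adv}}\big(A(E(x)),\, s\big)
$$

$$
\min_{\theta_E,\,\theta_T}\; \mathcal{L}_{\text{task}}\big(T(E(x)),\, y\big) \;-\; \lambda\, \mathcal{H}_{\text{focal}}\big(A(E(x))\big)
$$

Here $y$ is the target label, $s$ the sensitive attribute, and $\lambda > 0$ trades utility against privacy: the adversary learns to predict $s$ from the representation, while the encoder is rewarded for making the adversary's output distribution high-entropy, i.e. uninformative about $s$.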
Abstract
How can we learn a representation with high predictive power while preserving user privacy? We present an adversarial representation learning method for sanitizing sensitive content from the learned representation. Specifically, we introduce a variant of entropy, focal entropy, which mitigates the potential information leakage of existing entropy-based approaches. We demonstrate feasibility on multiple benchmarks. The results suggest high target utility at moderate privacy leakage.
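The abstract does not define focal entropy. By analogy with focal loss, one plausible reading adds a modulating factor $(1-p)^\gamma$ that down-weights the contribution of classes the adversary is already confident about. The sketch below illustrates that assumed form on toy distributions; the function names and the choice of $\gamma$ are illustrative, not taken from the paper:

```python
import numpy as np

def entropy(p, eps=1e-12):
    # Standard Shannon entropy: -sum_i p_i * log(p_i).
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + eps)))

def focal_entropy(p, gamma=2.0, eps=1e-12):
    # Hypothetical focal-style entropy: each term is modulated by
    # (1 - p_i)^gamma, by analogy with focal loss. This is an assumed
    # form for illustration, not the paper's exact definition.
    p = np.asarray(p, dtype=float)
    return float(-np.sum((1.0 - p) ** gamma * p * np.log(p + eps)))

# Uniform adversary output (maximally uncertain about the sensitive
# attribute) versus a peaked one (attribute nearly leaked).
uniform = np.ones(4) / 4
peaked = np.array([0.97, 0.01, 0.01, 0.01])

print(entropy(uniform), focal_entropy(uniform))
print(entropy(peaked), focal_entropy(peaked))
```

Under this reading, the encoder would maximize the focal entropy of the adversary's predictions, pushing them toward the uniform case while the modulating factor focuses the regularizer on the classes that still carry confident, leakable signal.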