Achieving Hilbert-Schmidt Independence Under Rényi Differential Privacy for Fair and Private Data Generation

📅 2025-08-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for synthesizing heterogeneous tabular data in sensitive domains (e.g., healthcare) struggle to simultaneously guarantee differential privacy, task-agnostic fairness, and high data utility. Method: We propose FLIP, a Transformer-based architecture integrating variational autoencoders with latent diffusion, trained under Rényi Differential Privacy (RDP) constraints. FLIP introduces a novel Centered Kernel Alignment (CKA)-based inter-group alignment mechanism in latent space that enforces statistical independence between sensitive attributes and latent representations, the first such alignment achieved within privacy-preserving training. It further employs an RDP-compatible balanced sampling strategy to jointly optimize the privacy budget, fairness, and utility. Results: Experiments demonstrate that FLIP significantly improves task-agnostic fairness across multiple downstream tasks (+12.7% on average) while strictly satisfying ε ≤ 2 privacy guarantees. FLIP establishes a new paradigm for privacy-preserving synthetic data generation.
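The balanced-sampling idea in the summary can be illustrated with a minimal sketch: each protected group gets its own Poisson inclusion rate so every group contributes the same expected count per batch. This is an illustration of group-balanced Poisson subsampling, not the paper's actual procedure, and the accompanying per-group RDP accounting is not shown.

```python
import numpy as np

def group_balanced_poisson_sample(groups: np.ndarray, expected_per_group: int,
                                  rng: np.random.Generator) -> np.ndarray:
    """Illustrative group-balanced Poisson subsampling: each record in group g
    is included independently with rate q_g = expected_per_group / n_g, so
    every protected group contributes the same expected count per batch.
    (Sketch only; the paper's RDP accounting for per-group rates is omitted.)"""
    selected = []
    for g in np.unique(groups):
        members = np.where(groups == g)[0]
        q = min(1.0, expected_per_group / len(members))  # group-specific rate
        mask = rng.random(len(members)) < q              # independent coin flips
        selected.append(members[mask])
    return np.concatenate(selected)
```

Because minority groups receive a higher sampling rate, they also incur a different per-step privacy cost, which is why the summary emphasizes RDP-compatible accounting across multiple sampling rates.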

📝 Abstract
As privacy regulations such as the GDPR and HIPAA, and responsibility frameworks for artificial intelligence such as the AI Act, gain traction, the ethical and responsible use of real-world data faces increasing constraints. Synthetic data generation has emerged as a promising approach to risk-aware data sharing and model development, particularly for the tabular datasets that are foundational to sensitive domains such as healthcare. To address both privacy and fairness concerns in this setting, we propose FLIP (Fair Latent Intervention under Privacy guarantees), a transformer-based variational autoencoder augmented with latent diffusion to generate heterogeneous tabular data. Unlike the typical setting in fairness-aware data generation, we assume a task-agnostic setup that does not rely on a fixed, predefined downstream task, offering broader applicability. To ensure privacy, FLIP enforces Rényi differential privacy (RDP) constraints during training and addresses fairness in the input space through RDP-compatible balanced sampling that accounts for group-specific noise levels across multiple sampling rates. In the latent space, it promotes fairness by aligning neuron activation patterns across protected groups using Centered Kernel Alignment (CKA), a similarity measure that extends the Hilbert-Schmidt Independence Criterion (HSIC). This alignment encourages statistical independence between the latent representations and the protected feature. Empirical results demonstrate that FLIP achieves significant improvements in task-agnostic fairness and across diverse downstream tasks under differential privacy constraints.
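For context on the RDP constraint mentioned in the abstract, the standard conversion from an (α, ε_α)-RDP guarantee to an (ε, δ)-DP guarantee is ε = ε_α + log(1/δ)/(α − 1). A minimal sketch, using the Gaussian mechanism's well-known RDP curve ε_α = α/(2σ²) as the example (this is generic RDP accounting, not the paper's accountant):

```python
import math

def rdp_to_dp(alpha: float, rdp_eps: float, delta: float) -> float:
    """Standard conversion: (alpha, rdp_eps)-RDP implies
    (rdp_eps + log(1/delta) / (alpha - 1), delta)-DP."""
    assert alpha > 1 and 0 < delta < 1
    return rdp_eps + math.log(1.0 / delta) / (alpha - 1.0)

def gaussian_rdp(alpha: float, sigma: float) -> float:
    """RDP of the Gaussian mechanism (sensitivity 1, noise std sigma)."""
    return alpha / (2.0 * sigma ** 2)

def best_eps(sigma: float, delta: float, alphas=range(2, 64)) -> float:
    """Pick the tightest (eps, delta) over a grid of RDP orders."""
    return min(rdp_to_dp(a, gaussian_rdp(a, sigma), delta) for a in alphas)
```

Optimizing over the order α is what makes RDP accounting tight under composition, which is why RDP-style accountants are the usual basis for guarantees such as the ε ≤ 2 budget reported for FLIP.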
Problem

Research questions and friction points this paper is trying to address.

Generating fair synthetic tabular data under privacy constraints
Ensuring statistical independence from protected features in latent space
Achieving task-agnostic fairness without predefined downstream tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based variational autoencoder with latent diffusion
Rényi differential privacy constraints during training
Centered Kernel Alignment for latent space fairness
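The CKA mechanism listed above can be made concrete with a minimal linear-kernel sketch. CKA is HSIC normalized to [0, 1]: CKA(K, L) = HSIC(K, L) / √(HSIC(K, K)·HSIC(L, L)), which for linear kernels reduces to a normalized Frobenius inner product on centered data. Shapes and names here are illustrative; this is not the paper's implementation.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two representation matrices (n_samples x features).
    Equals HSIC(K, L) / sqrt(HSIC(K, K) * HSIC(L, L)) with linear kernels;
    1.0 means identical representations up to rotation and isotropic scaling."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic_xy = np.linalg.norm(Y.T @ X, "fro") ** 2
    hsic_xx = np.linalg.norm(X.T @ X, "fro")
    hsic_yy = np.linalg.norm(Y.T @ Y, "fro")
    return hsic_xy / (hsic_xx * hsic_yy)
```

In FLIP's setting, a term like `1 - linear_cka(Z_g1, Z_g2)` over equally sized per-group latent batches could serve as an alignment penalty encouraging similar activation patterns across protected groups; the exact loss used in the paper is not reproduced here.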