SoftAdaClip: A Smooth Clipping Strategy for Fair and Private Model Training

πŸ“… 2025-10-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
In differentially private (DP) training, hard gradient clipping in DP-SGD disproportionately attenuates learning signals from minority subgroups, degrading both model utility and fairness. To address this, we propose SoftAdaClip, a soft adaptive gradient clipping method that replaces hard clipping with a tanh-based smooth transformation, preserving relative gradient magnitudes across subgroups, and integrates adaptive sensitivity estimation to mitigate subgroup bias while strictly satisfying DP constraints. Experiments on the MIMIC-III, GOSSIS-eICU, and Adult Income datasets demonstrate that SoftAdaClip reduces subgroup performance disparity by up to 87% and 48% compared to standard DP-SGD and Adaptive-DP-SGD, respectively, with statistical significance (p < 0.01). To our knowledge, this is the first work to introduce smooth clipping into DP training, achieving a principled balance among privacy guarantees, overall model utility, and subgroup fairness.

πŸ“ Abstract
Differential privacy (DP) provides strong protection for sensitive data, but often reduces model performance and fairness, especially for underrepresented groups. One major reason is gradient clipping in DP-SGD, which can disproportionately suppress learning signals for minority subpopulations. Although adaptive clipping can enhance utility, it still relies on uniform hard clipping, which may restrict fairness. To address this, we introduce SoftAdaClip, a differentially private training method that replaces hard clipping with a smooth, tanh-based transformation to preserve relative gradient magnitudes while bounding sensitivity. We evaluate SoftAdaClip on diverse datasets: MIMIC-III (clinical text), GOSSIS-eICU (structured healthcare), and Adult Income (tabular data). Our results show that SoftAdaClip reduces subgroup disparities by up to 87% compared to DP-SGD and up to 48% compared to Adaptive-DP-SGD, and these reductions are statistically significant. These findings underscore the importance of integrating smooth transformations with adaptive mechanisms to achieve fair and private model training.
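To make the contrast concrete, here is a minimal sketch of hard clipping versus a tanh-based soft bound. The paper's exact formula is not given on this page; the form `C * tanh(||g|| / C)` is an illustrative assumption (it is smooth in the gradient norm, bounded by C so sensitivity stays controlled, and nearly identity for small gradients), and both function names are hypothetical.

```python
import numpy as np

def hard_clip(g, C):
    # Standard DP-SGD clipping: rescale so the gradient norm is at most C.
    # Gradients above the threshold are all flattened to norm exactly C.
    norm = np.linalg.norm(g)
    return g * min(1.0, C / norm) if norm > 0 else g

def soft_clip_tanh(g, C):
    # Assumed tanh-based smooth bound: output norm is C * tanh(||g|| / C).
    # Still bounded by C, but the mapping from input norm to output norm is
    # smooth, so relative magnitudes between samples are better preserved
    # than with the hard cutoff above.
    norm = np.linalg.norm(g)
    if norm == 0:
        return g
    return g * (C * np.tanh(norm / C) / norm)
```

Note the key difference: under hard clipping, two gradients with norms 5 and 50 both end up at norm C and become indistinguishable in magnitude; under the smooth bound they map to slightly different norms below C, retaining some ordering information.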
Problem

Research questions and friction points this paper is trying to address.

Addresses unfair gradient suppression in differentially private training
Replaces hard clipping with smooth transformation for fairness
Reduces subgroup performance disparities while maintaining privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

SoftAdaClip uses tanh-based smooth gradient transformation
It replaces hard clipping to preserve relative gradient magnitudes
Method integrates smooth transformations with adaptive mechanisms
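The integration described above can be sketched as one DP-SGD-style update that applies a smooth per-example transform before averaging and adding Gaussian noise. This is a hedged illustration, not the paper's algorithm: the `C * tanh(||g|| / C)` bound, the function name, and the noise calibration are assumptions; the paper's adaptive sensitivity estimation (how C itself is chosen per step) is not shown here.

```python
import numpy as np

def private_step(per_example_grads, C, noise_multiplier, rng):
    # Smoothly bound each per-example gradient to norm at most C
    # (assumed tanh form), keeping per-sample sensitivity bounded.
    transformed = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        scale = (C * np.tanh(norm / C) / norm) if norm > 0 else 1.0
        transformed.append(g * scale)
    # Average and add Gaussian noise calibrated to sensitivity C,
    # as in standard DP-SGD.
    n = len(transformed)
    avg = np.mean(transformed, axis=0)
    noise = rng.normal(0.0, noise_multiplier * C / n, size=avg.shape)
    return avg + noise
```

With `noise_multiplier=0` the step reduces to plain averaging of the smoothly bounded gradients, which makes the transform easy to unit-test in isolation from the privacy noise.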
Dorsa Soleymani
Faculty of Computer Science, Dalhousie University, Vector Institute, Canada
Ali Dadsetan
Faculty of Computer Science, Dalhousie University, Vector Institute, Canada
Frank Rudzicz
Dalhousie University, Computer Science; Vector Institute for Artificial Intelligence
Natural language processing, machine learning, healthcare, surgical safety, brain-computer