🤖 AI Summary
This work addresses privacy leakage that arises when e-values are used for statistical inference and risk control on sensitive data. We propose the first general framework for transforming any non-private e-value into a differentially private (DP) e-value, supporting arbitrary stopping times and post-hoc valid inference. The core innovation is a bias-corrected multiplicative noise mechanism that preserves the statistical power of e-values under strict ε-DP guarantees, with performance asymptotically approaching that of the non-private counterpart. The method bridges DP mechanisms, online monitoring, conformal prediction, and hypothesis testing. Experiments on online risk monitoring, healthcare analytics, and conformal e-prediction show that the approach significantly outperforms existing DP e-value methods, delivering both high statistical efficiency and rigorous privacy protection.
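To make the mechanism concrete, here is a minimal Python sketch of one way a bias-corrected multiplicative noise mechanism can be realized. It is an illustration under our own assumptions, not necessarily the paper's exact construction: Laplace noise is added to the log e-value (the standard Laplace mechanism, calibrated to a user-supplied sensitivity bound `log_sensitivity`), and the deterministic factor 1 − b² offsets the Laplace moment E[e^Z] = 1/(1 − b²) so the output remains a valid e-value in expectation.

```python
import numpy as np

def private_e_value(e_value: float, log_sensitivity: float, epsilon: float,
                    rng=None) -> float:
    """Release an eps-DP version of `e_value` via multiplicative Laplace noise
    in the log domain, bias-corrected so the output is still a valid e-value.

    `log_sensitivity` is an assumed bound on |log E(D) - log E(D')| over all
    neighbouring datasets D, D'; the analyst must supply it.
    """
    rng = rng or np.random.default_rng()
    b = log_sensitivity / epsilon          # Laplace scale of the log-domain mechanism
    if b >= 1.0:
        raise ValueError("bias correction requires b = sensitivity / epsilon < 1")
    z = rng.laplace(loc=0.0, scale=b)      # Laplace mechanism on log E  =>  eps-DP
    # For Z ~ Laplace(0, b) with b < 1, E[exp(Z)] = 1 / (1 - b^2); multiplying by
    # (1 - b^2) therefore restores E[output] <= 1 under the null hypothesis.
    return e_value * np.exp(z) * (1.0 - b ** 2)
```

Because the correction makes the multiplicative factor exp(Z) · (1 − b²) a mean-one random variable independent of the data, the released value inherits the e-value property and can be fed unchanged into anytime-valid downstream procedures.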
📝 Abstract
E-values have gained prominence as flexible tools for statistical inference and risk control, enabling anytime- and post-hoc-valid procedures under minimal assumptions. However, many real-world applications fundamentally rely on sensitive data, which can be leaked through e-values. To ensure their safe release, we propose a general framework to transform non-private e-values into differentially private ones. Towards this end, we develop a novel biased multiplicative noise mechanism that ensures our e-values remain statistically valid. We show that our differentially private e-values attain strong statistical power, and are asymptotically as powerful as their non-private counterparts. Experiments across online risk monitoring, private healthcare, and conformal e-prediction demonstrate our approach's effectiveness and illustrate its broad applicability.
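For reference, the two standard definitions the abstract builds on, stated in our own notation: an e-value is a nonnegative statistic whose expectation under the null is at most one, and a mechanism is ε-differentially private if its output distribution is insensitive to any single record.

```latex
% E is an e-value for the null H_0 if
\mathbb{E}_{P}[E] \le 1 \quad \text{for every } P \in H_0 .

% A randomized mechanism M is \varepsilon-differentially private if, for all
% neighbouring datasets D, D' and all measurable sets S,
\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S] .
```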