Paper 'Bayesian Estimation of Differential Privacy' accepted at ICML 2023
Paper 'Analyzing Leakage of Personally Identifiable Information in Language Models' accepted at IEEE S&P (Oakland) 2023
Paper 'SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning' accepted at IEEE S&P (Oakland) 2023
Paper 'Two-in-One: A Model Hijacking Attack Against Text Generation Models' accepted at USENIX Security 2023
Paper 'UnGANable: Defending Against GAN-based Face Manipulation' accepted at USENIX Security 2023
Paper 'Get a Model! Model Hijacking Attack Against Machine Learning Models' accepted at NDSS 2022
Paper 'ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models' accepted at USENIX Security 2022
Paper 'BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements' accepted at ACSAC 2021
Paper 'Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning' accepted at USENIX Security 2020
Paper 'MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples' accepted at CCS 2019
Paper 'ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models' accepted at NDSS 2019
Paper 'Privacy-preserving Similar Patient Queries for Combined Biomedical Data' accepted at PoPETs 2019
Published multiple technical reports, including 'Dynamic Backdoor Attacks Against Machine Learning Models' and 'MLCapsule: Guarded Offline Deployment of Machine Learning as a Service'