2025: Contributed to the technical report 'Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities'
2025: Published 'Differentially Private Parameter-Efficient Fine-tuning for Large ASR Models' at Interspeech
2025: Published 'Enhancing Prompt Injection Attacks to LLMs via Poisoning Alignment' at ACM AISec
2024: Published 'AudioMarkBench: Benchmarking Robustness of Audio Watermarking' in the NeurIPS Datasets and Benchmarks Track
2024: Published 'Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models' at USENIX Security Symposium
2024: Published 'Visual Hallucinations of Multi-modal Large Language Models' in Findings of ACL
2024: Published 'Differentially Private Parameter-Efficient Fine-tuning for Large ASR Models' at IEEE S&P DLSP
2024: Published 'Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning' at IEEE S&P SAGAI
2024: Published 'CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning' at CVPR
2024: Co-authored book chapter '10 Security and Privacy Problems in Large Foundation Models'
2023: Published 'Generation-based fuzzing? Don’t build a new generator, reuse!' in Computers & Security
2023: Published 'PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees' at CVPR
2022: Published 'PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning' at USENIX Security Symposium
2022: Published 'Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning' at ECCV
2022: Published 'StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning' at ACM CCS
2022: Published 'Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations' at ICLR
2021: Published 'EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning' at ACM CCS
2021: Published 'PointGuard: Provably Robust 3D Point Cloud Classification' at CVPR
2021: Published 'On the Intrinsic Differential Privacy of Bagging' at IJCAI