Papers under review: 'Membership Inference Attacks on Finetuned Diffusion Language Models'; 'Window-based Membership Inference Attacks Against Fine-tuned Large Language Models'.
Research Experience
Designs and evaluates membership inference attacks to quantify privacy leakage in generative AI systems; also researches computational creativity (controllable generation and multi-modal human-robot interaction) and adversarial defenses (against prompt-level jailbreaks and data poisoning). Presented a poster at the Gameful And Immersive Learning Symposium and gave a research presentation at the 2023 RPI Undergraduate Research Fair.
Education
Ph.D. Candidate in Computer Science at Purdue University, Advisor: Prof. Ninghui Li.
Background
Research interests: privacy and security for large language models. His work involves designing and evaluating membership inference attacks to quantify privacy leakage in generative AI systems, as well as developing mitigation strategies that balance model utility with data confidentiality. His research also encompasses computational creativity (controllable generation and multi-modal human-robot interaction) and adversarial defenses (against prompt-level jailbreaks and data poisoning).
Miscellany
Excited to join Purdue University and begin his Ph.D. journey in Computer Science.