The paper 'Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data?' won the Best Paper Award at the NeurIPS 2024 AdvML-Frontiers Workshop
One paper accepted at ICLR 2025, two at NAACL 2025, and two at COLM 2024
Organized the ICLR 2025 workshop 'Building Trust in Language Models and Applications'
Led the NeurIPS 2024 competition 'Erasing the Invisible: A Stress-Test Challenge for Image Watermarks'
Published multiple papers on AI safety, watermarking, robustness, and fairness at top-tier venues including ACL, ICML, ICLR, COLM, and AAAI
Background
Fifth-year PhD candidate in Computer Science at the University of Maryland
Research focus: Responsible AI
Particularly interested in the safety, alignment, robustness, fairness, and interpretability of generative AI
Member of UMIACS (University of Maryland Institute for Advanced Computer Studies)