Published several papers, including "SIRAJ: Diverse and Efficient Red-Teaming for LLM Agents via Distilled Structured Reasoning," "Jailbreak Distillation: Renewable Safety Benchmarking," and "Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements," among others. Received the VLDB 2016 Best Paper Award.
Research Experience
Was a research intern at Microsoft Research and at the Allen Institute for Artificial Intelligence (AI2). Spent summer 2015 at the IBM Almaden Research Center, working on compressed linear algebra for large-scale machine learning.
Education
Received a Ph.D. in Computer Science from the University of Maryland (UMD) in 2021, advised by Jordan Boyd-Graber.
Background
Senior Researcher at the Microsoft Research AI Frontiers Lab, focusing on reasoning and agentic models. Previously part of Microsoft's Responsible AI team, where he worked on the safety alignment and evaluation of LLMs and agents.