Publications
Published several papers and technical reports, including 'Large Reasoning Models Learn Better Alignment from Flawed Thinking', 'Shape it Up! Restoring LLM Safety during Finetuning', and 'Llama Guard 3 Vision: Safeguarding Human-AI Image Understanding Conversations', among others. Several of these papers have been accepted at top conferences such as NeurIPS and ICLR.
Research Experience
Currently a research scientist at Meta Superintelligence Labs, responsible for system-level safety and contributing to pre-training and post-training safety. Served as a core contributor on multiple projects.
Education
Ph.D. in Computer Science, University of Virginia, 2022.
Background
Research scientist at Meta Superintelligence Labs, working on LLM alignment and reasoning. Holds a Ph.D. in Computer Science from the University of Virginia (2022), with a focus on Machine Learning and Natural Language Processing, particularly ML/AI safety.
Miscellany
Personal interests and other information: not provided.