News
Paper 'Better Estimation of the KL Divergence Between Language Models' accepted at NeurIPS 2025; paper 'Variational Best-of-N Alignment' presented at ICLR 2025; received the Qualcomm Innovation Fellowship Europe Award in July 2024; gave a tutorial on Generating Text from Language Models at ACL 2023 in July 2023.
Research Experience
Successfully passed Ph.D. defense in September 2025; started a 3-month research internship at the Allen Institute for AI (Ai2) in April 2025; spent a year (part-time) as a student researcher at Google DeepMind, Zurich, starting October 2022; began Ph.D. studies at the ETH AI Center in October 2021.
Education
Ph.D. from the ETH AI Center, primarily supervised by Prof. Ryan Cotterell and co-supervised by Prof. Elliott Ash; M.Sc. in Computer Science from ETH Zürich; B.Sc. in Computer Engineering (Software Engineering major) from Sharif University of Technology.
Background
Currently a research scientist at Google DeepMind, working on Gemini post-training. Research interests include developing efficient and reliable methods for aligning language models.