Invited talks at top venues and institutions including ICML, NeurIPS, ACL, Stanford, Harvard, Oxford, NVIDIA, and EPFL.
Research Experience
Postdoctoral researcher at the Allen Institute for AI and the University of Washington, advised by Prof. Hanna Hajishirzi.
Research intern at Google.
Core contributor to the Tulu and Open-Instruct projects, developing post-training pipelines involving supervised fine-tuning, direct preference optimization, and reinforcement learning with verifiable rewards.
Contributed to the open-source LLM projects OLMo and OLMo 2.
Co-organized workshops including SoLaR (Socially Responsible Language Modelling Research) and UnImplicit.
Education
PhD in Computer Science from Bar-Ilan University's NLP Lab, supervised by Prof. Ido Dagan and Prof. Reut Tsarfaty.
MSc from the University of Edinburgh.
BA from the University of Zurich.
Visited UW NLP as a PhD student under the supervision of Prof. Yejin Choi.
Completed two research internships at the Allen Institute for AI.
Background
Currently a Postdoctoral Researcher (and Young Investigator) at the Allen Institute for AI and the University of Washington.
Research focuses on developing generative AI that is contextually robust, responsible, and open.
Particularly interested in extending language model capabilities through post-training and adaptation.
Contributed to widely-used benchmarks such as RewardBench.
Research areas include: open science of LLMs and post-training, steerability, underspecification, and contextually precise responses.