Valentina Pyatkin
Google Scholar ID: E9EgKkMAAAAJ
Allen Institute for AI & University of Washington
NLP · Generative AI · Language Modeling · Responsible AI · ML
Citations & Impact (all-time)
  • Citations: 3,026
  • H-index: 19
  • i10-index: 29
  • Publications: 20
  • Co-authors: 129
Academic Achievements
  • ACL Outstanding Paper Award.
  • ACL Best Theme Paper Award.
  • AI2 Outstanding Intern of the Year Award.
  • Two paper awards at ACL 2024.
  • Two papers accepted to ICML 2025.
  • Tutorial Chair for EMNLP 2025.
  • Internal Communication Chair for ACL 2024.
  • Invited talks at top venues and institutions including ICML, NeurIPS, ACL, Stanford, Harvard, Oxford, NVIDIA, and EPFL.
Research Experience
  • Postdoctoral researcher at the Allen Institute for AI and the University of Washington, advised by Prof. Hanna Hajishirzi.
  • Research intern at Google.
  • Core contributor to the Tulu and Open-Instruct projects, developing post-training pipelines involving supervised fine-tuning, direct preference optimization, and reinforcement learning with verifiable rewards.
  • Contributed to open-source LLM projects OLMo and OLMo2.
  • Co-organized workshops including SoLaR (Socially Responsible Language Modelling Research) and UnImplicit.
Education
  • PhD in Computer Science from Bar-Ilan University, NLP Lab, supervised by Prof. Ido Dagan and Prof. Reut Tsarfaty.
  • MSc from the University of Edinburgh.
  • BA from the University of Zurich.
  • Visiting PhD student at UW NLP under the supervision of Prof. Yejin Choi.
  • Completed two research internships at the Allen Institute for AI.
Background
  • Currently a Postdoctoral Researcher (and Young Investigator) at the Allen Institute for AI and the University of Washington.
  • Research focuses on developing generative AI that is contextually robust, responsible, and open.
  • Particularly interested in extending language model capabilities through post-training and adaptation.
  • Contributed to widely used benchmarks such as RewardBench.
  • Research areas include the open science of LLMs and post-training, steerability, underspecification, and precise contextual responses.