Natasha Jaques
Google Scholar ID: 8iCb2TwAAAAJ
University of Washington, Google Research
Social reinforcement learning
Machine learning
Deep learning
Multi-agent RL
Human-AI interaction
Homepage
Google Scholar
Citations & Impact (all-time)
Citations: 5,929
H-index: 29
i10-index: 40
Publications: 20
Co-authors: 16
Contact
Email: natashamjaques@gmail.com
CV
Twitter
GitHub
Publications (23 items)
How LLMs Distort Our Written Language (2026). Cited: 0
Improving Interactive In-Context Learning from Natural Language Feedback (2026). Cited: 0
Are Language Models Sensitive to Morally Irrelevant Distractors? (2026). Cited: 0
AgenticRed: Optimizing Agentic Systems for Automated Red-teaming (2026). Cited: 0
Evaluating Generalization Capabilities of LLM-Based Agents in Mixed-Motive Scenarios Using Concordia (2025). Cited: 2
Generative Adversarial Post-Training Mitigates Reward Hacking in Live Human-AI Music Interaction (2025). Cited: 0
RLVE: Scaling Up Reinforcement Learning for Language Models with Adaptive Verifiable Environments (2025). Cited: 0
Consistently Simulating Human Personas with Multi-Turn Reinforcement Learning (2025). Cited: 0
Resume (English only)
Academic Achievements
2023 Best Paper, AAAI Workshop on Representation Learning for Responsible Human-Centric AI
2021 Outstanding PhD Dissertation Award, Association for the Advancement of Affective Computing
2021 Best of Collection, IEEE Transactions on Affective Computing (impact factor: 10.5)
2020 Best Paper, NeurIPS Workshop on Cooperative AI
2019 Honorable Mention for Best Paper, ICML
2019 Rising Stars in EECS Pitch Competition Winner
2019 Best Paper Nominee, NeurIPS Workshop on Conversational AI
2017 Centennial Alumni of Distinction, Campion College
2016 Best Paper, NeurIPS Workshop on ML for Healthcare
2016 Best Demo, NeurIPS
Work featured in Science, MIT Technology Review, IEEE Spectrum, Quartz, National Geographic, Boston Magazine, CBC radio, and more
Research Experience
During PhD at MIT, developed RL-based language model fine-tuning and human feedback learning techniques later built upon by OpenAI’s RLHF work
Developed methods for improving multi-agent coordination through optimization of social influence
Interned at DeepMind and Google Brain; served as OpenAI Scholars Mentor
Visiting Postdoctoral Scholar in Sergey Levine’s group at UC Berkeley
As Senior Research Scientist at Google Brain, built adversarial environment generation methods to enhance RL agent robustness
Co-authors (16 total)
Asma Ghandeharioun (Sr. Research Scientist, Google DeepMind)
Douglas Eck (Google Research, Brain Team)
Shixiang Shane Gu (Google DeepMind)
Lynn H. Kaack (Hertie School)
David Rolnick (McGill University, Mila Quebec AI Institute)