- Publication: Simfluence: Modeling the influence of individual training examples by simulating training runs, Kelvin Guu* et al., arXiv preprint, 2023
- Publication: RARR: Researching and revising what language models say, using language models, Luyu Gao* et al., ACL, 2023
- Publication: Finetuned language models are zero-shot learners, Jason Wei* et al., ICLR, 2022
Research Experience
- 2018 - Present: Senior Staff Research Scientist and Manager at Google DeepMind, developing new methods for machine learning and natural language processing
- 2018 - 2019: Lecturer at Stanford University, teaching core topics in artificial intelligence (CS221) to 700+ students
- Summer 2015: Ph.D. Software Engineering Intern at Google, working with Jakob Uszkoreit on applied NLP projects
- Winter 2014/15: Research Consultant at MetaMind, working with Founder & CEO/CTO Richard Socher
- 2011 - 2012: Researcher in Bayesian Statistics at Duke University, working with Professors David B. Dunson and Alex Hartemink
Education
- 2012 - 2018: Ph.D. in Statistics at Stanford University, Advisor: Percy Liang, Committee: Percy Liang, Wing Hung Wong, Chris Manning, Lester Mackey
- 2007 - 2011: B.S. in Mathematics at Duke University, Minor in Biology, Advisors: David B. Dunson and Alex Hartemink
Background
- Research Interests: Machine learning, deep learning, natural language processing, semantic parsing, reinforcement learning, statistics
- Applications of Interest: Voice user interfaces, natural language interfaces, machine learning in healthcare, recommender systems
- Motivation: Aims to make machine learning so cheap and easy to use that everyone can benefit from it; excited about improving collective intelligence.