- Predictability shapes adaptation: An evolutionary perspective on modes of learning in transformers
- On the generalization of language models from in-context learning and finetuning: A controlled study
Research Experience
Research scientist at Google DeepMind, focusing on how the mind combines familiar parts to solve unfamiliar problems (compositionality), how those parts are represented and processed, and how those representations adapt to make solving familiar problems more efficient over time (automaticity). Also explores how insights from cognitive science can help evaluate the capabilities and limitations of frontier models.
Education
PhD student at Princeton University, advised by Tom Griffiths, Jon Cohen, and Mike Mozer.
Background
PhD candidate at Princeton and a research scientist at Google DeepMind. Research interests include continual learning, meta-learning, and resource-rationality. Focuses on the computational principles underlying the flexibility of human cognition and whether these principles extend to large language models.
Miscellany
Contact and profile links: Email, Google Scholar, GitHub, Twitter, and CV.