Selected publications
- Modeling rapid language learning by distilling Bayesian priors into artificial neural networks. Nature Communications.
- Embers of Autoregression: Understanding Large Language Models Through the Problem They Are Trained to Solve. Proceedings of the National Academy of Sciences.
- Infinite use of finite means? Evaluating the generalization of center embedding learned from an artificial grammar. Proceedings of the 43rd Annual Conference of the Cognitive Science Society.
- Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
- Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks. Transactions of the Association for Computational Linguistics.
Research centers on neural network language models, with an emphasis on connecting such systems to linguistics. It investigates how people can acquire language from so little data and how this ability can be replicated in machines, and it explores what types of machines can represent the structure of language and how they do so.
Background
Assistant Professor in the Department of Linguistics at Yale University, also affiliated with the Department of Computer Science and the Wu Tsai Institute. Research focuses on computational linguistics, using techniques from cognitive science and artificial intelligence to study the computational principles underlying human language, with particular interest in language learning and linguistic representations.
Miscellany
Conversation topics can be found on the 'Conversation topics' page. Currently considering postdoc and PhD applicants for Fall 2026. Not accepting Master’s students at this time.