Winner of the Best Paper Award at ICML 2024 for 'Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo'. Published 'Variational Representations of Annealing Paths' in the journal Information Geometry, and published Extended State Space Importance Sampling, applied to mutual information estimation, at ICLR 2022.
Research Experience
Postdoctoral Fellow at the Vector Institute in Toronto, working with Alireza Makhzani and Roger Grosse. Previously interned with the AI Safety Analysis team at DeepMind, working with Pedro Ortega and Tim Genewein.
Education
PhD from the University of Southern California in 2022, advised by Greg Ver Steeg and Aram Galstyan.
Background
Research interests center on control-as-inference perspectives, particularly as applied in recent language-model and diffusion-model settings. Currently a Postdoctoral Fellow at the Vector Institute in Toronto.
Miscellany
Wrote a blog post reviewing control-as-inference perspectives.