Honored to have helped organize flagship research conferences and workshops in probabilistic ML, including AABI and AISTATS. Delivered tutorials and talks such as NeurIPS 2020 on 'Advances in Approximate Inference' (with Cheng Zhang), ProbAI 2022 on 'Introduction to Bayesian Neural Networks', GeMSS 2023 on 'Sequential Generative Models', an AAAI 2023 New Faculty Highlight on 'Robust and Adaptive Deep Learning via Bayesian Principles', and UAI 2025 on 'Modern Approximate Inference: Variational Methods and Beyond' (with Diana Cai).
Research Experience
Currently an Associate Professor in Machine Learning at the Department of Computing, Imperial College London. Since March 2024, also a Turing Fellow at The Alan Turing Institute. Previously spent 2.5 years as a senior researcher at Microsoft Research Cambridge before returning to academia.
Education
PhD in Machine Learning from the University of Cambridge, supervised by Prof. Richard E. Turner. Member of Darwin College.
Background
Interested in building reliable machine learning systems that can generalize to unseen environments. Research interests include: (deep) probabilistic graphical model design; fast and accurate (Bayesian) inference and computation techniques; uncertainty quantification for computation and downstream tasks; and robust and adaptive machine learning systems. Also interested in transfer/meta learning, information theory, optimization, and sequential data modeling.
Miscellany
Contact: firstname.lastname [at] imperial [dot] ac [dot] uk or liyzhen2 [at] gmail [dot] com.