Publications
Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data (2022, arXiv preprint)
The reproducing Stein kernel approach for post-hoc corrected sampling (2020, arXiv preprint)
Generalization Bounds using Lower Tail Exponents in Stochastic Optimizers (2022, ICML conference paper)
Geometric Rates of Convergence for Kernel-based Sampling Algorithms (2021, UAI conference paper)
Multiplicative noise and heavy tails in stochastic optimization (2021, ICML conference paper)
Shadow Manifold Hamiltonian Monte Carlo (2021, AISTATS conference paper)
Implicit Langevin Algorithms for Sampling From Log-concave Densities (2021, Journal of Machine Learning Research)
Research Experience
He was previously a postdoctoral research scholar at the University of California, Berkeley, and the International Computer Science Institute (ICSI), under the supervision of Michael Mahoney.
Education
He obtained his Ph.D. from the University of Queensland, supervised by Prof Philip Pollett and Dr Ross McVinish.
Background
He is a Lecturer in Data Science in the School of Mathematics and Statistics at the University of Melbourne. His research interests lie primarily in probabilistic machine learning (including Bayesian methods), the theory of deep learning, and robust training schemes for neural networks.