Recent publications: 'Can Large Language Models Reason about Program Invariants?' (International Conference on Machine Learning, 2023), 'Natural Language to Code Generation in Interactive Data Science Notebooks' (Proceedings of the Association for Computational Linguistics (ACL), 2023).
Research Experience
Currently a Research Scientist at Google DeepMind and an Honorary Fellow at the School of Informatics, University of Edinburgh. Previously part of a large machine learning group at Edinburgh.
Background
Research interests span a broad range of applications of probabilistic methods for machine learning, including software engineering, natural language processing, computer security, queueing theory, and sustainable energy. Although disparate, these applications are connected by an underlying statistical methodology: probabilistic modelling and techniques for approximate inference in graphical models.