Max Bartolo

Google Scholar ID: jPSWYn4AAAAJ
Cohere, UCL
NLP · Machine Learning · LLMs · Robustness
Citations & Impact (all-time)
  • Citations: 4,322
  • H-index: 19
  • i10-index: 24
  • Publications: 20
  • Co-authors: 90
Academic Achievements
  • Published 'Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants' at NAACL 2022.
  • 'Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity' was selected as an outstanding paper at ACL 2022.
  • Published 'Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality' at CVPR 2022.
  • Published 'Dynatask: A Framework for Creating Dynamic AI Benchmark Tasks' in the ACL 2022 Demo Track.
Research Experience
  • Developed and taught the MSIN0221 Natural Language Processing module at the UCL School of Management.
  • Interned at DeepMind with Po-Sen Huang and Johannes Welbl.
  • Collaborated with Facebook AI Research (FAIR) under Douwe Kiela and Robin Jia on dynamic adversarial data collection, improving model robustness, and using generative assistants to improve annotation.
  • Worked as a Machine Learning Engineer at Bloomsbury AI.
Education
  • PhD with the UCL NLP group, supervised by Pontus Stenetorp and Sebastian Riedel.
  • Master's degree from the UCL Department of Computer Science.
  • Bachelor's degree in Mechanical Engineering from the University of Malta.
Background
  • Currently a researcher at Cohere, leading the Command team and serving as a working group co-chair for Dynabench at MLCommons. His research focuses on the robustness and reasoning of large language models (LLMs).
Miscellany
  • Invited talk on the application of LLMs for enterprise at the Oracle AI@Molitor event.
  • Invited talk on NLP applications and large language models to the Capital Enterprise startup network.
  • Invited talk on dynamic adversarial data collection for large language models at the UCL AI Centre seminar on the present and future of large language models in theory and practice.