Jonathan Richard Schwarz

Google Scholar ID: Efs3XxQAAAAJ
Thomson Reuters
Machine Learning · Statistics · Artificial Intelligence
Citations & Impact (all-time)
  • Citations: 6,051
  • H-index: 19
  • i10-index: 22
  • Publications: 20
  • Co-authors: 13
Academic Achievements
  • Published extensively in top-tier venues including ICLR, NeurIPS, ICML, ACL, EMNLP, JMLR, TMLR, CVPR, ECCV, and TACL.
  • Recent works (2023–2025) include Scales++ (efficient LLM evaluation), an ACL'25 Oral on sparse Mixture-of-Experts for LLM upcycling, Composable Interventions (ICLR'25), AI agents for scientific discovery (Cell), CoLoR-Filter and MAC (NeurIPS'24), and SMAT (ICML'24).
  • Serving as Area Chair for EMNLP'25.
  • Invited to speak or participate in panels at The Royal Society, MICCAI 2024, and the Global Summit on Open Problems for AI.
  • Released a perspective paper on Empowering Scientific Discovery with AI Agents (April 2024).
Research Experience
  • Head of AI Research at Thomson Reuters; joined through the acquisition of Safe Sign Technologies, where he was Co-Founder and Chief Scientific Officer (CSO).
  • Former Research Fellow at Harvard University.
  • Former Senior Research Scientist at Google DeepMind.
  • Organised the NeurIPS'24 workshop on Compositional Learning.
  • Serving as publicity chair for CoLLAs 2025.
  • Delivered guest lectures at institutions including the University of Virginia and HKUST.
Background
  • Visiting Professor at Imperial College London and Head of AI Research at Thomson Reuters (TR), leading TR's Foundational Research Team.
  • Serves as an Expert Advisor to the UK's AI Security Institute.
  • Research focuses on building (i) efficient, (ii) general, and (iii) robust Machine Learning systems.
  • Central paradigm: designing algorithms that abstract knowledge and skills from related problems to enable efficient transfer learning with reduced time/data requirements.
  • Key research areas include Sparsity & Efficient Parameterizations, Large Language Models (LLMs), Data-centric ML, Continual Learning, Implicit Neural Representations (INRs) / Neural Data Compression, and Meta-Learning.