Matt MacDermott
Google Scholar ID: UHqQCmsAAAAJ
Imperial College London/Mila/LawZero
Artificial Intelligence
Citations & Impact
All-time
Citations
139
 
H-index
5
 
i10-index
3
 
Publications
8
 
Co-authors
14
Academic Achievements
  • Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path? (2025, arXiv preprint)
  • Can a Bayesian Oracle Prevent Harm from an Agent? (2024, arXiv preprint)
  • Measuring Goal-Directedness (2024, NeurIPS Spotlight)
  • The Reasons that Agents Act: Intention and Instrumental Goals (2024, AAMAS)
  • Discovering Agents (2023, Artificial Intelligence journal)
  • On Imperfect Recall in Multi-Agent Influence Diagrams (2023, TARK, Best Paper Award)
  • Characterising Decision Theories with Mechanised Causal Graphs (2023, GAIA Workshop at AAMAS)
Research Experience
  • 2025: Started working at LawZero.
  • May 2024: Started an internship with Yoshua Bengio on Safe AI for Humanity (SAIFH).
  • January 2024: Took part in the Alignment Research Engineer Accelerator programme.
  • October 2023: Helped organise the Agent Foundations for AI Alignment Workshop.
Education
  • PhD student at Imperial College London, CDT in Safe and Trusted AI.
Background
  • AI safety researcher at LawZero and PhD student at Imperial College London (CDT in Safe and Trusted AI); also a member of the Causal Incentives Working Group.