Elliott Thornley
Google Scholar ID: uN6WoScAAAAJ
MIT
AI safety, AI alignment, ethics, decision theory
Citations & Impact
All-time
Citations: 287
H-index: 7
i10-index: 5
Publications: 20
Co-authors: 5
Academic Achievements
  • Shutdownable Agents through POST-Agency (draft)
  • Towards Shutdownable Agents via Stochastic Choice (with Alexander Roman, Christos Ziakas, Leyton Ho, and Louis Thomson) - Technical AI Safety Conference, 2025. Open access.
  • The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists - Philosophical Studies, 2024. Open access.
  • A Non-Identity Dilemma for Person-Affecting Views (draft)
  • A Fission Problem for Person-Affecting Views - Ergo, forthcoming.
Research Experience
  • Currently working on AI safety at MIT, applying ideas from decision theory to the design and training of safer artificial agents.
Background
  • Elliott Thornley is a Postdoctoral Associate at MIT. From August 2026, he will be an Assistant Professor of Philosophy at NUS. His research interests include AI safety, decision theory, ethics, and the moral importance of future generations.