Academic Achievements
2025: 'Extinction of the human species: What could cause it and how likely is it to occur?', Cambridge Prisms: Extinction
2024: Co-authored 'Classifying Global Catastrophic Risk'
2024: Co-authored 'Accumulating Evidence Using Crowdsourcing and Machine Learning'
2022: 'Strengthen biosecurity when rewiring global food supply chains', Nature
2022: 'Responsible artificial intelligence in agriculture requires systemic understanding of risks and externalities', Nature Machine Intelligence
2020: 'Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance', Philosophy & Technology
2020: Contributed to the GPAI Sub-Working Group report 'AI & Pandemic Response' (November 2020)
2019: 'Bridging near- and long-term concerns about AI', Nature Machine Intelligence
2018: Published multiple papers on global catastrophic risks, existential risk, and AI evolution frameworks
Research Experience
Founding Executive Director of the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, focusing on global risks from emerging technologies
Managed the Oxford Martin Programme on the Impacts of Future Technology (2011–2014)
Co-developed the Strategic AI Research Centre (Cambridge-Oxford collaboration) in 2015
Co-founded the Leverhulme Centre for the Future of Intelligence (Cambridge-Oxford-Imperial-Berkeley collaboration) in 2015/16
At Oxford, established the FHI-Amlin Collaboration on Systemic Risk (academic-reinsurance partnership on catastrophic risk modelling) and led several other research programmes