Heidy Khlaaf

Google Scholar ID: sjaNfa0AAAAJ
Chief AI Scientist, AI Now Institute
AI assurance · formal verification · machine learning · systems engineering · safety auditing
Citations & Impact
All-time
  • Citations: 7,413
  • H-index: 12
  • i10-index: 13
  • Publications: 20
  • Co-authors: 0
Academic Achievements
  • Published multiple papers, including 'Safety Co-Option and Compromised National Security: The Self-Fulfilling Prophecy of Weakened AI Risk Thresholds'. Won the best paper award at CAV 2015, with the paper subsequently invited to JACM. Contributed to the development of standards and auditing frameworks for AI and ML, including policy and regulatory frameworks for US and UK regulators.
Research Experience
  • Serves as the Chief AI Scientist at the AI Now Institute, responsible for evaluating and ensuring the safety of AI in autonomous weapons systems. Led the safety evaluation of Codex at OpenAI, developing a framework that measures a model's performance outcomes against a cross-functional risk assessment. Previously served as Engineering Director of the AI Assurance team at Trail of Bits, where she led cyber evaluations as part of the launch of the UK AI Safety Institute.
Education
  • Completed her PhD in Computer Science at University College London in 2017, advised by Nir Piterman. Recipient of the NSF GRFP award.
Background
  • Chief AI Scientist focusing on the assessment and safety of AI within autonomous weapons systems. Specializes in the evaluation, specification, and verification of complex or autonomous software, particularly in safety-critical systems. Has extensive experience leading system safety audits, ranging from UAVs to nuclear power plants, and contributing to the construction of safety cases for safety-critical software.
Miscellany
  • Personal interests include climbing.
Co-authors: 0 (list not available)