Publications and Awards
Has published multiple papers, including 'Safety Co-Option and Compromised National Security: The Self-Fulfilling Prophecy of Weakened AI Risk Thresholds'. Received the Best Paper Award at CAV 2015 and was subsequently invited to publish in the Journal of the ACM (JACM). Has contributed to the development of standards and auditing frameworks for AI and ML, including policy and regulatory frameworks for US and UK regulators.
Research Experience
Serves as Chief AI Scientist at the AI Now Institute, responsible for evaluating and ensuring the safety of AI in autonomous weapons systems. Led the safety evaluation of Codex at OpenAI, developing a framework that measures a model's performance outcomes against a cross-functional risk assessment. Previously, she was Engineering Director of the AI Assurance team at Trail of Bits, where she led cyber evaluations as part of the launch of the UK AI Safety Institute.
Education
Completed her PhD in Computer Science at University College London in 2017, advised by Nir Piterman. Recipient of the prestigious NSF Graduate Research Fellowship (GRFP).
Background
Chief AI Scientist focusing on the assessment and safety of AI within autonomous weapons systems. Specializes in the evaluation, specification, and verification of complex or autonomous software implementations, particularly in safety-critical systems. Has extensive experience leading system safety audits, for systems ranging from UAVs to large nuclear power plants, and has contributed to the construction of safety cases for safety-critical software.