Outstanding Paper Award at NAACL 2024 (A Pretrainer’s Guide to Training Data)
Paper 'Consent in Crisis' accepted to NeurIPS 2024; covered by The New York Times, 404 Media, Vox, and Yahoo! Finance
3 Oral papers and 1 Spotlight paper accepted to ICML 2024, on topics including an AI safe harbor, the societal impact of open foundation models, autonomous weapons risks, and data authenticity
Paper 'Multimodal Data Provenance' accepted to ICLR 2025
Data Provenance Initiative awarded Mozilla Data Futures Lab grant and MIT Generative AI Impact Award ($70,000)
Core writing team for the International AI Safety Report
Lead organizer of The Future of Third-Party AI Evaluation Workshop (Dec 2024)
Background
PhD Candidate at MIT
Research focuses on the intersection of AI and policy: responsibly training, evaluating, and governing general-purpose AI systems
Leads the Data Provenance Initiative
Led the Open Letter on A Safe Harbor for Independent AI Evaluation & Red Teaming
Contributed to training models including BLOOM, Aya, and Flan-T5/PaLM