Ninareh Mehrabi


Google Scholar ID: 3SfPdIQAAAAJ
Amazon
AI Safety · Responsible AI
Citations & Impact (all-time)
  • Citations: 8,189
  • H-index: 15
  • i10-index: 21
  • Publications: 20
  • Co-authors: 3
Academic Achievements
  • A Survey on Bias and Fairness in Machine Learning, ACM Computing Surveys (CSUR)
  • FLIRT: Feedback Loop In-context Red Teaming, EMNLP 2024
  • Towards Safety Reasoning in LLMs: AI-agentic Deliberation for Policy-embedded CoT Data Creation, ACL 2025 Findings
  • DATA ADVISOR: Dynamic Data Curation for Safety Alignment of Large Language Models, EMNLP 2024
  • Additional papers published at NAACL, ACL, and other venues
Research Experience
  • Senior Research Scientist, Meta Superintelligence Labs
  • Applied Scientist, Amazon AGI
  • Postdoctoral Researcher, Amazon Alexa AI Responsible AI Team
Education
  • Ph.D., University of Southern California (Information Sciences Institute), advised by Aram Galstyan and Fred Morstatter as part of the MINDS group.
Background
  • Senior research scientist at Meta's Superintelligence Labs, working on red teaming and frontier risks. Previously, an applied scientist at Amazon AGI, developing responsible AI systems. Prior to that, a postdoctoral researcher at Amazon Alexa AI's Responsible AI team.
Miscellany
  • Invited talk at NSERC Responsible AI (October 2024); work featured in Cloudwalkers ISI and the Inventors of the Future documentary.