Kavosh Asadi
Google Scholar ID: -2qyBJEAAAAJ
Meta
Research interests: Reinforcement Learning · AI Alignment · Optimization
Citations & Impact
All-time:
  • Citations: 3,027
  • H-index: 15
  • i10-index: 19
  • Publications: 20
  • Co-authors: 20
Academic Achievements
  • Published over a dozen papers at top-tier AI conferences including NeurIPS, ICML, ICLR, AAAI, and ACL
  • ICML 2024: Paper on learning the target network in function space accepted
  • ICLR 2024: Paper on foundation models for continual learning accepted
  • NeurIPS 2023: Two papers accepted
  • RLC (inaugural Reinforcement Learning Conference): Paper on fairness in RL accepted
  • NeurIPS 2022: Two papers accepted
  • AISTATS 2022: One paper accepted
  • NeurIPS 2021: One paper accepted
  • AAAI 2021: Two papers accepted
  • Co-authored the RL chapter in the D2L (Dive into Deep Learning) book
Research Experience
  • 2025–Present: Senior Scientist on Meta’s RL team
  • 2020–2025: Scientist at Amazon (promoted to Senior Scientist in 2024)
  • Invited talk at the Seattle Mind and Machines meetup (University of Washington)
  • Invited talk at Amazon’s RL reading group
  • Guest lecturer for Harvard’s machine learning class
Background
  • AI scientist aiming to understand the computational principles underlying intelligence
  • Focuses on agents that interact with sequential environments and improve their behavior through trial and error — that is, the reinforcement learning problem
  • Academically interested in the optimization problems that arise in value function learning
  • Applies research to developing assistive AI agents that interact with humans and learn from feedback
  • Aspires to build AI agents that co-exist with humans and help them live their best lives
Miscellany
  • Moved from Providence, RI to the SF Bay Area
  • Moved from the SF Bay Area to work remotely from Hawaii
  • Moved from Oahu, Hawaii to Seattle, WA
  • Currently based in the Bay Area, working at Meta
  • Open to connecting with AI scientists, engineers, and students via email (firstname@alumni.brown.edu)