Glen Berseth
Google Scholar ID: -WZcuuwAAAAJ
Assistant Professor - Université de Montréal
Reinforcement Learning, Robotics, Deep Learning, Machine Learning
Citations & Impact
  • Citations (all-time): 3,727
  • H-index: 26
  • i10-index: 46
  • Publications: 20
  • Co-authors: 31
Academic Achievements
  • 2025: Three papers accepted to ICLR 2025 on efficient imitation learning, exploration for GFNs, and real-time reinforcement learning
  • 2024: Three papers accepted to NeurIPS 2024 on inverse safe RL, converging/scaling deep RL algorithms, and amortizing intractable inference
  • 2024: Paper accepted to RSS 2024 on creating more diverse datasets for generalist robot policies
  • 2024: Best paper award at ICLR 2024 on VLMs for robotics
  • 2024: Five papers accepted to ICLR 2024 on efficient exploration, generalization in sequence planning, intelligent switching for reset-free RL, LLMs + RL for drug discovery, and latent diffusion for offline RL
  • 2023: Paper accepted to NeurIPS 2023 on efficient exploration in finite horizons
  • 2023: Paper accepted to IROS on using offline RL to bootstrap human-robot interfaces
  • 2023: Paper accepted to RA-L on torque vs. position control for humanoids and Sim2Real training
  • 2022: IROS paper on quadruped robots learning soccer shooting; IROS Best RoboCup Paper Award Finalist
  • 2021: NeurIPS paper on surprise minimization in partially observed environments
  • 2021: ICRA paper on RL for bipedal robots featured in MIT Technology Review
Background
  • Assistant Professor at Université de Montréal
  • Core academic member of Mila - Quebec AI Institute
  • Canada CIFAR AI Chair
  • Member of L'Institut Courtois
  • Co-director of the Robotics and Embodied AI Lab (REAL)
  • Research focuses on machine learning and real-world sequential decision-making problems (e.g., planning/RL) in robotics, scientific discovery, and adaptive clean technology
  • Specific research areas include human-robot collaboration, generalization, reinforcement learning, continual learning, meta-learning, multi-agent learning, and hierarchical learning
  • Teaches data science and robot learning courses at Université de Montréal and Mila
  • Co-created a new conference for reinforcement learning research
  • Supports the 'Slow Science' movement