JB Lanier
Google Scholar ID: 6IkG2m0AAAAJ
UC Irvine
Research interests: Reinforcement Learning, Multiagent Systems, Game Theory, Sim-to-real
Citations & Impact
All-time
  • Citations: 310
  • H-index: 7
  • i10-index: 6
  • Publications: 18
  • Co-authors: 8
Academic Achievements
  • Adapting World Models with Latent-State Dynamics Residuals (Preprint, March 2025)
  • Toward Optimal Policy Population Growth in Two-Player Zero-Sum Games (ICLR 2024)
  • Selective Perception: Learning Concise State Descriptions for Language Model Actors (NAACL 2024)
  • Feasible Adversarial Robust Reinforcement Learning for Underspecified Environments (NeurIPS 2022 Deep RL Workshop)
  • Self-Play PSRO: Toward Optimal Populations in Two-Player Zero-Sum Games (arXiv, 2022)
  • Anytime PSRO for Two-Player Zero-Sum Games (arXiv, 2022)
  • XDO: A Double Oracle Algorithm for Extensive-Form Games (NeurIPS 2021)
  • Improving Social Welfare While Preserving Autonomy via a Pareto Mediator (arXiv, 2021)
  • OffWorld Gym: Open-Access Physical Lunar Analog Environment for Reinforcement Learning and Robotics Research (COSPAR 2021)
  • Pipeline PSRO: A Scalable Approach for Finding Approximate Nash Equilibria in Large Games (NeurIPS 2020, equal contribution)
  • ColosseumRL: A Framework for Multiagent Reinforcement Learning in N-Player Games (COMARL AAAI 2020)
  • Curiosity-Driven Multi-Criteria Hindsight Experience Replay (NeurIPS 2019 Deep RL Workshop)
Background
  • PhD Student at UC Irvine
  • Specializes in Deep Reinforcement Learning for multi-agent systems and robotics
  • Recent work focuses on efficiently training strong agents for two-player zero-sum games
  • Currently researching simulation-to-real transfer for robotic control
  • Developing model-based RL methods that produce world models suitable for planning in novel environments
  • Aims to build reliable agents capable of handling unexpected situations