Malcolm Strens
Google Scholar ID: O4ZsVzYAAAAJ
Independent Research
Reinforcement Learning, Computational Statistics
Citations & Impact (all-time)
  Citations: 350
  H-index: 6
  i10-index: 4
  Publications: 20
  Co-authors: 5 (list available)
Resume
Academic Achievements
  • Published multiple influential papers, including 'A Bayesian framework for reinforcement learning' (2000), his most-cited work, which initiated the field of Posterior Sampling for Reinforcement Learning (PSRL). Other work includes applying direct search optimization to MCMC sampling (2002) and evolutionary MCMC sampling and optimization in discrete state spaces (2003).
Research Experience
  • Has been involved in several reinforcement-learning research projects, including proposing Posterior Sampling for Reinforcement Learning, which maintains a Bayesian estimate of the environment's dynamics to express uncertainty. He also developed an effective MCMC framework for sampling and optimization in vector spaces.
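The PSRL idea described above can be sketched as follows: sample one model of the environment from the posterior, act greedily under that sample for an episode, then update the posterior with the observed transitions. This is a minimal illustrative sketch on a hypothetical toy MDP with known rewards and Dirichlet posteriors over transitions, not Strens' original implementation; all numbers and names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state, 2-action MDP: rewards known, transitions unknown.
n_states, n_actions, gamma = 2, 2, 0.9
true_P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                   [[0.7, 0.3], [0.05, 0.95]]])  # true_P[s, a] = next-state dist
R = np.array([[0.0, 0.0], [1.0, 0.1]])           # R[s, a], assumed known

# Dirichlet(1, ..., 1) prior over each transition row, stored as counts.
counts = np.ones((n_states, n_actions, n_states))

def greedy_policy(P, R, gamma, iters=200):
    """Value iteration on a (sampled) MDP; returns the greedy policy."""
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = R + gamma * P @ V   # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

s = 0
for episode in range(200):
    # 1. Sample one MDP from the posterior over transition probabilities.
    P_sample = np.array([[rng.dirichlet(counts[si, a]) for a in range(n_actions)]
                         for si in range(n_states)])
    # 2. Solve the sampled MDP and act greedily for a short episode.
    policy = greedy_policy(P_sample, R, gamma)
    for _ in range(10):
        a = policy[s]
        s_next = rng.choice(n_states, p=true_P[s, a])
        counts[s, a, s_next] += 1   # 3. Update the posterior with observed data.
        s = s_next

learned = greedy_policy(counts / counts.sum(-1, keepdims=True), R, gamma)
print("policy under posterior mean:", learned)
```

Because exploration comes from posterior sampling rather than an explicit bonus, the agent tries uncertain actions early and converges on exploiting the rewarding state as the posterior concentrates.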
Background
  • His research interests include reinforcement learning, Bayesian approaches to machine learning, Markov Chain Monte Carlo (MCMC) sampling techniques, and dynamic replanning in multi-robot task allocation.