Publications
Published multiple significant papers, including 'A Bayesian Framework for Reinforcement Learning' (2000), his most cited work, which initiated the field of Posterior Sampling for Reinforcement Learning. Other works include Markov Chain Monte Carlo sampling using direct search optimization (2002) and evolutionary MCMC sampling and optimization in discrete spaces (2003).
Research Experience
He has been involved in several research projects on reinforcement learning, including proposing a method called Posterior Sampling for Reinforcement Learning, which maintains a Bayesian estimate of the environment's dynamics to express uncertainty. He also developed an effective MCMC framework for sampling and optimization in vector spaces.
Background
His research interests include reinforcement learning, Bayesian approaches to machine learning, Markov Chain Monte Carlo (MCMC) sampling techniques, and dynamic replanning in multi-robot task allocation.