Currently interested in building agents that can efficiently learn new tasks in new environments.
Believes such agents must learn a latent world model combining (a) a representation model mapping observations to a compact latent space, and (b) a generative world model describing latent dynamics.
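The two components above can be sketched as a toy in plain Python. This is only an illustrative skeleton, not any particular method: the linear-plus-tanh maps, the dimensions, and the function names (`encode`, `predict_next`) are all hypothetical stand-ins for (a) a representation model and (b) a generative latent-dynamics model.

```python
import math
import random

random.seed(0)

# Hypothetical dimensions, chosen only for illustration.
OBS_DIM, LATENT_DIM, ACTION_DIM = 8, 3, 2

def rand_matrix(rows, cols):
    """Small random weight matrix (stand-in for learned parameters)."""
    scale = 1.0 / math.sqrt(cols)
    return [[random.uniform(-scale, scale) for _ in range(cols)] for _ in range(rows)]

def matvec_tanh(m, v):
    """Apply a linear map followed by tanh, row by row."""
    return [math.tanh(sum(w * x for w, x in zip(row, v))) for row in m]

# (a) Representation model: maps an observation to a compact latent state.
W_enc = rand_matrix(LATENT_DIM, OBS_DIM)
def encode(obs):
    return matvec_tanh(W_enc, obs)

# (b) Generative world model: predicts the next latent state
# from the current latent state and an action.
W_dyn = rand_matrix(LATENT_DIM, LATENT_DIM + ACTION_DIM)
def predict_next(z, action):
    return matvec_tanh(W_dyn, z + action)

# Rolling the model forward happens entirely in latent space:
obs = [random.gauss(0, 1) for _ in range(OBS_DIM)]
z = encode(obs)
z_next = predict_next(z, [0.0] * ACTION_DIM)
```

The point of the structure is that planning or policy learning can then operate on the compact latent rollout (`z`, `z_next`, …) rather than on raw observations.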
Thinks LLMs/VLMs can serve as rich priors for solving new tasks.
Research lies at the intersection of representation learning, model-based reinforcement learning, and in-context learning.