Antoine Dedieu
Google Scholar ID: Hgoc3FUAAAAJ
Senior Research Scientist at Google DeepMind
Reinforcement Learning · Representation Learning · Bayesian Modeling · Statistical Learning
Citations & Impact (all-time)
  • Citations: 798
  • h-index: 11
  • i10-index: 12
  • Publications: 20
  • Co-authors: 13
Academic Achievements
  • “Improving Transformer World Models for Data-Efficient RL” received the Outstanding Paper Award at the ICLR 2025 World Model Workshop; to appear at ICML 2025.
  • Co-first author of “DMC-VB: A Benchmark for Representation Learning for Control with Visual Distractors,” NeurIPS 2024.
  • Contributed to “Diffusion Model Predictive Control,” TMLR 2025.
  • “Learning Cognitive Maps from Transformer Representations for Efficient Planning in Partially Observed Environments,” ICML 2024.
  • “Schema-learning and rebinding as mechanisms of in-context learning and emergence,” NeurIPS 2023 Spotlight.
  • “Learning noisy-OR Bayesian Networks with Max-Product Belief Propagation,” ICML 2023.
  • “Perturb-and-max-product: Sampling and learning in discrete energy-based models,” NeurIPS 2021.
  • “Sample-Efficient L0-L2 Constrained Structure Learning of Sparse Ising Models,” AAAI 2021.
Background
  • Senior Research Scientist at Google DeepMind.
  • Currently interested in building agents that can efficiently learn new tasks in new environments.
  • Believes such agents must learn a latent world model combining (a) a representation model mapping observations to a compact latent space, and (b) a generative world model describing latent dynamics.
  • Thinks LLMs/VLMs can serve as rich priors for solving new tasks.
  • Research lies at the intersection of representation learning, model-based reinforcement learning, and in-context learning.
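The two-part world model described above can be sketched in a few lines. This is a minimal illustrative example, not code from any published work: the class and function names (`RepresentationModel`, `LatentDynamicsModel`, `imagine_rollout`) are hypothetical, and the linear-plus-tanh maps stand in for the learned networks such agents would actually use.

```python
import numpy as np


class RepresentationModel:
    """(a) Maps high-dimensional observations to a compact latent space."""

    def __init__(self, obs_dim, latent_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((obs_dim, latent_dim)) * 0.1

    def encode(self, obs):
        # A stand-in for a learned encoder network.
        return np.tanh(obs @ self.W)


class LatentDynamicsModel:
    """(b) Generative world model: predicts the next latent state
    from the current latent state and an action."""

    def __init__(self, latent_dim, action_dim, seed=1):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((latent_dim, latent_dim)) * 0.1
        self.B = rng.standard_normal((action_dim, latent_dim)) * 0.1

    def step(self, z, action):
        # A stand-in for a learned transition network.
        return np.tanh(z @ self.A + action @ self.B)


def imagine_rollout(encoder, dynamics, obs, actions):
    """Roll the world model forward in latent space, without
    querying the real environment -- the key to data efficiency."""
    z = encoder.encode(obs)
    trajectory = [z]
    for a in actions:
        z = dynamics.step(z, a)
        trajectory.append(z)
    return trajectory
```

Planning and policy learning can then operate on such imagined latent trajectories rather than on costly real-environment interaction.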