Valentin Thomas

Google Scholar ID: XRhKEGMAAAAJ
ML scientist, Layer6.ai
machine learning, reinforcement learning
Citations & Impact (all-time)
  • Citations: 485
  • H-index: 12
  • i10-index: 13
  • Publications: 20
  • Co-authors: 16
Academic Achievements
  • CausalPFN: Amortized Causal Effect Estimation via In-Context Learning, NeurIPS 2025 (Spotlight)
  • TabDPT: Scaling Tabular Foundation Models, NeurIPS 2025
  • Retrieval and Fine-tuning for In-Context Tabular Models, NeurIPS 2024, ICML 2024 Workshop on In-Context Learning
  • In-Context Data Distillation with TabPFN, ICLR 2024 Workshop on Mathematical and Empirical Understanding of Foundation Models
  • Bridging the Gap Between Target Networks and Functional Regularization, TMLR 2023, NeurIPS 2021 DeepRL workshop
  • On the role of overparameterization in off-policy Temporal Difference learning with linear function approximation, NeurIPS 2022
  • The Role of Baselines in Policy Optimization, NeurIPS 2022
  • Beyond variance reduction: Understanding the true impact of baselines on policy optimization, ICML 2021
Research Experience
  • Currently a Senior Machine Learning Scientist at Layer6, mainly working on foundation models for tabular data and time series.
Education
  • PhD, Mila; Supervisors: Yoshua Bengio and Nicolas Le Roux
Background
  • Research interests include reinforcement learning, deep learning, and optimization. Completed a PhD at Mila, focusing on reinforcement learning and deep learning.
Miscellany
  • Personal website includes links to Google Scholar, Twitter, and Github.