Emil Carlsson
Google Scholar ID: VZhBQWQAAAAJ
Research Scientist
Reinforcement Learning · Bandits · Game Theory · Cognitive Science
Citations & Impact (all-time)
  • Citations: 102
  • H-index: 6
  • i10-index: 5
  • Publications: 18
  • Co-authors: 13
Resume (English only)
Academic Achievements
  • Published multiple research papers in leading AI/ML conferences such as NeurIPS and AISTATS, including:
    - Active preference learning for ordering items in- and out-of-sample (NeurIPS 2024)
    - Pure exploration in bandits with linear constraints (AISTATS 2024)
    - Cultural evolution via iterated learning and communication explains efficient color naming systems (Journal of Language Evolution, 2024)
    - Variational Quantum Optimization with Continuous Bandits (Under Submission, 2025)
    - Identifiable latent bandits: Combining observational data and exploration for personalized healthcare (ICML Workshop, 2024)
    - Learning Efficient Recursive Numeral Systems via Reinforcement Learning (AI for Math Workshop @ ICML, 2024)
    - Fast Treatment Personalization with Latent Bandits in Fixed-Confidence Pure Exploration (TMLR, 2023)
    - Towards Learning Abstractions via Reinforcement Learning (AIC, 2022)
    - Pragmatic reasoning in structured signaling games (CogSci, 2022)
    - Thompson sampling for bandits with clustered arms (IJCAI, 2021)
Research Experience
  • Research Scientist at Sleep Cycle, working on the development of data-driven decision-making systems.
Education
  • Ph.D. in Computer Science from Chalmers University of Technology.
Background
  • Currently working as a Research Scientist at Sleep Cycle, focusing on developing reliable data-driven decision-making systems. His primary research interests are reinforcement learning and bandit algorithms, particularly improving the efficiency and effectiveness of sequential decision processes.
Miscellany
  • Can be reached at emil(at)sleepcycle(dot)com or on LinkedIn.