Dilip Arumugam

Google Scholar ID: gzHbYVQAAAAJ
Postdoctoral Research Associate - Princeton University
Reinforcement Learning · Information Theory · Machine Learning · Artificial Intelligence
Citations & Impact (all-time)
  • Citations: 908
  • H-index: 14
  • i10-index: 20
  • Publications: 20
  • Co-authors: 13
Academic Achievements
Selected Papers & Publications:
  • On Temporal Credit Assignment and Data-Efficient Reinforcement Learning, RLC Finding the Frame Workshop, 2025
  • Toward Efficient Exploration by Large Language Model Agents, ICML Exploration in AI Today Workshop, 2025
  • Trade-Offs Between Tasks Induced by Capacity Constraints Bound the Scope of Intelligence, Proceedings of the 47th Annual Meeting of the Cognitive Science Society (CogSci), 2025
  • Satisficing Exploration for Deep Reinforcement Learning, RLC Finding the Frame Workshop, 2024
  • Bayesian Reinforcement Learning with Limited Cognitive Load, Open Mind: Discoveries in Cognitive Science, 2024
  • Cultural Reinforcement Learning: A Framework for Modeling Cumulative Culture on a Limited Channel, Proceedings of the 45th Annual Meeting of the Cognitive Science Society (CogSci), 2023
  • Deciding What to Model: Value-Equivalent Sampling for Reinforcement Learning, Advances in Neural Information Processing Systems (NeurIPS), 2022
  • Planning to the Information Horizon of Bayes-Adaptive Markov Decision Processes via Epistemic State Abstraction, Advances in Neural Information Processing Systems (NeurIPS), 2022
  • The Value of Information When Deciding What to Learn, Advances in Neural Information Processing Systems (NeurIPS), 2022
Research Experience
  • Postdoctoral researcher in the Princeton University Computer Science Department, working with Tom Griffiths
  • Research internships at Microsoft Research Cambridge, Microsoft Research Redmond, Mila, and Google DeepMind
Education
  • Ph.D. from the Stanford University Computer Science Department, advised by Benjamin Van Roy
  • M.S. from the Stanford University Statistics Department
  • B.S. and M.S. degrees from the Brown University Computer Science Department, advised by Michael Littman, also working closely with Stefanie Tellex
Background
  • Research Interests: data efficiency in reinforcement learning, the application of information theory to reinforcement learning, and comparing the sample efficiency of computational and biological decision-making agents.
Miscellany
  • On the academic & industry job markets for the 2025-2026 cycle.