Anthony GX-Chen

Google Scholar ID: 7jAlFsIAAAAJ
NYU
Machine Learning · Reinforcement Learning · Artificial Intelligence · Neuroscience
Citations & Impact (all-time)
  • Citations: 140
  • H-index: 4
  • i10-index: 3
  • Publications: 11
  • Co-authors: 20
Academic Achievements
  • 2025: 'KL-Regularized Reinforcement Learning is Designed to Mode Collapse', NeurIPS Workshop on Foundations of Reasoning in Language Models (accepted)
  • 2025: 'Language Agents Mirror Human Causal Reasoning Biases. How Can We Help Them Think Like Scientists?', Conference on Language Modeling (COLM)
  • 2025: 'Efficient Exploration and Discriminative World Model Learning with an Object-Centric Abstraction', ICLR
  • 2024: 'Testing Causal Hypotheses through Hierarchical Reinforcement Learning', NeurIPS Workshop on Intrinsically Motivated Open-ended Learning
  • 2024: 'Light-weight probing of unsupervised representations for reinforcement learning', Reinforcement Learning Conference (RLC), co-authored with Yann LeCun et al.
Background
  • 5th-year Ph.D. candidate at NYU's CILVR Lab and Center for Data Science
  • Research focuses on understanding the reinforcement learning (RL) framework and developing better RL algorithms
  • Key questions: efficient exploration and autonomous world modeling, scalable RL with minimal tricks, leveraging foundation models to discover unknowns
  • Master's thesis introduced new value function decomposition methods in RL, linked to hippocampal neuroscience theories
  • Undergraduate collaborations with researchers in psychiatric genomics, computational neuroscience, and theoretical neuroscience