Published multiple papers, including 'Real World Reinforcement Learning of Active Perception Behaviors' (NeurIPS 2025), 'RoboArena: Distributed Real-World Evaluation of Generalist Robot Policies' (CoRL 2025), 'Reset-free Reinforcement Learning with World Models' (TMLR 2025), 'The Belief State Transformer' (ICLR 2025), 'The Value of Sensory Information to a Robot' (ICLR 2025), 'Privileged Sensing Scaffolds Reinforcement Learning' (ICLR 2024 Spotlight, 5% acceptance rate; 3rd highest-rated paper at ICLR), and 'Planning Goals for Exploration' (ICLR 2023 Spotlight, 5% acceptance rate; Best Paper Award at the CoRL 2022 Roboadapt Workshop).
Research Experience
Research focuses on world models and planning, the sensory requirements of policy learning, and learning comprehensive behaviors. Specific projects include learning to stack blocks without rewards, zero-shot transfer to new robot arms, and showing that LLMs trained with world-modeling objectives are better at planning.
Education
Currently a CS Ph.D. student at the University of Pennsylvania, advised by Dinesh Jayaraman, and a student researcher at Microsoft AI Frontiers working with John Langford. Previously earned a BS/MS in CS from the University of Southern California, where research on RL was conducted with Joseph J. Lim.
Background
Research interests include deep learning, reinforcement learning (RL), world models, and their applications in robotics and large language models (LLMs). Enjoys advancing artificial intelligence across settings ranging from virtual agents to physical robots.
Miscellany
Looking for research-oriented roles starting Fall 2025 / Spring 2026 involving deep learning, deep reinforcement learning, world modeling, sequence prediction, LLMs, robotics, or embodied AI.