Publications
Published multiple papers in areas such as machine unlearning, interactive learning, and stochastic optimization. Notable papers include:
- 'Remember What You Want to Forget: Algorithms for Machine Unlearning' (NeurIPS 2021)
- 'Ticketed Learning-Unlearning Schemes' (COLT 2023)
- 'Selective Sampling and Imitation Learning via Online Regression' (NeurIPS 2023)
- 'On the Complexity of Adversarial Decision Making' (NeurIPS 2022, oral presentation)
- 'Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient' (ICLR 2023)
- 'SGD: The Role of Implicit Regularization, Batch-size and Multiple Epochs' (NeurIPS 2021)
- 'From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent' (NeurIPS 2022)
- 'The Complexity of Making the Gradient Small in Stochastic Convex Optimization' (COLT 2019, Best Student Paper Award)
Recent news includes new papers on arXiv and papers accepted at top conferences.
Research Experience
Currently a researcher at EvolutionaryScale, applying reinforcement learning to advance protein engineering and to develop frontier AI for the life sciences.
Education
PhD in Computer Science from Cornell University, advised by Prof. Karthik Sridharan and Prof. Robert D. Kleinberg; Postdoctoral Researcher at MIT, advised by Prof. Alexander (Sasha) Rakhlin; undergraduate degree from IIT Kanpur, India.
Background
Research interests include reinforcement learning, interactive learning, machine unlearning and privacy, control theory, AI for science, large language models, and optimization. Current focus: exploring how ideas from reinforcement learning can unlock new capabilities in machine learning models and systems.
Miscellany
Will be an area chair for Algorithmic Learning Theory (ALT) 2025.