News
- Published work on faster rates for generic and adversarially robust optimization methods.
- Former postdoc Rohan Ghuge presented work on improved and oracle-efficient online ℓ1 multicalibration at ICML 2025.
- Student Milind Nakul presented work on estimating, frequency by frequency, the stationary mass of a distribution on a mixing sequence at COLT 2025.
- Presented work on optimal estimation of stationary missing mass on a mixing sequence at the International Indian Statistical Association conference.
- Received the ECE Roger Webb Outstanding Junior Faculty Award.
- Guanghui Wang received the Apple ML/AI Scholars PhD Fellowship.
- Chiraag Kaushik and Tyler LaBonte presented their work at NeurIPS 2024.
- Kuo-Wei Lai received the ARC-ACO Fellowship (Spring 2025 cycle).
Research Experience
Before joining Georgia Tech, spent a semester as a research fellow at the Simons Institute for the Theory of Computing in the program 'Theory of Reinforcement Learning'.
Education
Received a B.Tech (with honors) from the Indian Institute of Technology Madras and a Ph.D. in Electrical Engineering from the University of California, Berkeley.
Background
Broad interests span game theory, online learning, and statistical learning. Particularly interested in designing learning algorithms that provably adapt in strategic environments, in the fundamental properties of overparameterized models, and in the foundations of multi-agent decision-making.
Miscellany
In spare time, enjoys singing Carnatic vocal music, playing the piano, and long-distance cycling.