Publications
Private Algorithms for Stochastic Saddle Points and Variational Inequalities: Beyond Euclidean Geometry, NeurIPS 2024
Public-data Assisted Private Stochastic Optimization: Power and Limitations, NeurIPS 2024
Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates, ALT 2024
Differentially Private Algorithms for the Stochastic Saddle Point Problem with Optimal Rates for the Strong Gap, COLT 2023
Faster Rates of Convergence to Stationary Points in Differentially Private Optimization, ICML 2023
Differentially Private Generalized Linear Models Revisited, NeurIPS 2022
Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings, NeurIPS 2021
Research Experience
He is currently a postdoctoral fellow in the Department of Computer Science at the University of Toronto, hosted by Aleksandar Nikolov and Nicolas Papernot. He is also a postdoctoral affiliate of the Vector Institute.
Education
Ph.D. from The Ohio State University, advised by Raef Bassily
Background
His research focuses on expanding the theoretical foundations of machine learning. He is particularly interested in the design and analysis of machine learning and optimization algorithms that operate under algorithmic constraints, such as privacy, stability, and fairness. His work both characterizes the fundamental limits of learning under these constraints and develops theory-driven techniques for circumventing bottlenecks in trustworthy machine learning.