Awards and Honors
Paper 'Is My Prediction Arbitrary? The Confounding Effects of Variance in Fair Classification Benchmarks' won an Honorable Mention for Best Student Paper in the AI for Social Impact track at AAAI-2024; awarded a DARPA Young Faculty Award (YFA) grant for 'Decentralized Online Parameter-Efficient Fine-Tuning of Compressed Models'; received the Google Research Scholar Award; paper 'Optimal Complexity in Decentralized Training' (Yucheng Lu, Chris De Sa) won an Outstanding Paper Award Honorable Mention at ICML 2021 (awarded to 4 papers out of 1184 publications); received the National Science Foundation (NSF) CAREER Award; received the Mr. & Mrs. Richard F. Tucker Excellence in Teaching Award from the College of Engineering; awarded an NSF Robust Intelligence Small Grant for 'Reliable Machine Learning in Hyperbolic Spaces'.
Research Experience
Leads the Relax ML Lab; served as Program Chair of MLSys 2024; gave a keynote at the International Conference on AI-ML Systems; teaches multiple courses related to machine learning.
Education
Received a Ph.D. from Stanford University in 2017, advised by Kunle Olukotun and Chris Ré.
Background
Associate Professor in the Computer Science department at Cornell University. Research interests span algorithmic, software, and hardware techniques for high-performance machine learning, with a focus on relaxed-consistency variants of stochastic algorithms such as asynchronous and low-precision stochastic gradient descent (SGD) and Markov chain Monte Carlo. Aims to use these techniques to build efficient, parallel, and distributed frameworks for data analytics and machine learning.
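To illustrate the flavor of low-precision SGD mentioned above, here is a toy sketch (not code from the lab): gradients are quantized to a fixed-point grid using stochastic rounding, which keeps the update unbiased, and the quantized gradients drive an ordinary SGD step on a simple quadratic objective. The grid spacing `scale` and the objective are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, scale=2**-8):
    """Round x to a multiple of `scale` at random, so that
    the rounding is unbiased: E[stochastic_round(x)] == x."""
    q = x / scale
    low = np.floor(q)
    # Round up with probability equal to the fractional part of q.
    return (low + (rng.random(x.shape) < (q - low))) * scale

# Toy objective: f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([1.0, -0.5])
lr = 0.1
for _ in range(200):
    grad = w                             # exact gradient of the quadratic
    w = w - lr * stochastic_round(grad)  # low-precision SGD update

# Despite the quantized gradients, w contracts toward the optimum at 0.
assert np.linalg.norm(w) < 0.05
```

Because stochastic rounding injects only zero-mean noise, the iterates still converge to a small neighborhood of the optimum, with the neighborhood size governed by the quantization grid; this unbiasedness is what makes such relaxed-consistency updates analyzable.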
Miscellany
Co-organized the Cornell Institute for Digital Agriculture (CIDA) Hackathon; joined the CIDA executive committee; co-teaches PLSCI 7202, a short course on applications of machine learning to plant science, with faculty from other departments.