Paper 'The False Promise of Zero-Shot Super-Resolution in Machine-Learned Operators' accepted as a Top 5% Spotlight paper at ICLR 2025; also contributed to research projects on machine learning interpretability and on topology-aware knowledge propagation in decentralized learning.
Research Experience
Worked in Lawrence Berkeley National Laboratory's ML and Analytics group over the summer, studying the limits of machine-learned operators for modeling PDEs. Presented a poster titled 'The False Promise of Super-Resolution of Machine-Learned Operators' at the CSGF Program Review in Washington, DC, and gave a talk on 'Mitigating Memorization in Language Models' at Midwest Speech and Language Days at the University of Notre Dame.
Education
Ph.D.: University of Chicago, co-advised by Ian Foster and Kyle Chard. Bachelor's: University of North Carolina at Chapel Hill, with majors in Computer Science and Mathematics and a minor in Environmental Science.
Background
Computer Science Ph.D. student with research interests in machine learning interpretability, particularly in systematically reverse engineering neural networks to interpret their weights. This work aims to localize sources of model failure within weight space and to develop efficient methods for correcting model behavior.
Miscellany
Interviewed on the Department of Energy's 'Science in Parallel' podcast about the recent Nobel Prizes in Physics and Chemistry and their implications for machine learning and the domain sciences.