Has had several papers accepted at top conferences, including NeurIPS 2025 and ICML 2025, and has given keynotes and tutorials at international conferences. One of these papers, 'On the Surprising Effectiveness of Large Learning Rates under Standard Width Scaling,' was accepted to NeurIPS 2025 as a Spotlight.
Research Experience
Worked as an Applied Scientist at the AGI Foundations Lab at Amazon, conducting fundamental research on the theory and science of scaling large language models. Also spent six months at the Causality Lab of Amazon Research during her Ph.D.
Education
Ph.D. from the International Max Planck Research School for Intelligent Systems, supervised by Debarghya Ghoshdastidar (Theoretical Foundations of Artificial Intelligence group, Technical University of Munich) and Ulrike von Luxburg (Theory of Machine Learning group, University of Tübingen). The thesis received the Wilhelm Schickard Dissertation Award for an outstanding dissertation.
Background
Lecturer (Assistant Professor) at the Gatsby Unit, UCL. Research focuses on developing efficient, reliable, and trustworthy machine learning models by understanding the theoretical foundations of deep learning and causal learning, and on addressing open challenges in deep learning theory and causality.
Miscellany
Scheduled to give talks and tutorials at multiple international conferences and workshops, including CAMSAP, the UCL Neuro AI Annual Conference, and the WiML Symposium @ ICML 2025, among others.