Publications
Preprints: The Transformer Cookbook, etc.
Under Submission: Missingness Bias Calibration in Feature Attribution Explanations, etc.
Publications: Probabilistic Stability Guarantees for Feature Attributions (NeurIPS 2025); Probabilistic Soundness Guarantees in LLM Reasoning Chains (EMNLP 2025); etc.
Research Experience
Currently a postdoctoral researcher at the UT Austin Institute for Foundations of Machine Learning.
Education
Ph.D. in Computer Science from the University of Pennsylvania, advised by Rajeev Alur and Eric Wong; undergraduate degree from Yale University, where I worked with Ruzica Piskac.
Background
Research interests: Making AI systems safe and robust using ideas from formal methods, optimization, and more. Recent interests include discrete diffusion for code generation, theorem proving in research mathematics, and test-time scaling and verification of LLM reasoning.