Kehan Guo
Google Scholar ID: t8iRCLUAAAAJ
University of Notre Dame
LLM · Machine Reasoning · Generative Models · XAI · AI for Science
Citations & Impact
All-time
Citations: 693
H-index: 11
i10-index: 11
Publications: 20
Co-authors: 2
Academic Achievements
  • Multiple papers accepted at top conferences, including NeurIPS 2025: ChemOrch (LLMs + chemical tools) and AdaReasoner — Adaptive Reasoning Enables More Flexible Thinking (Spotlight). Also published a preprint on causally-enhanced reinforcement policy optimization and work on uncertainty-aware yield prediction accepted at CIKM 2025. Contributor to several research projects, including a survey on Artificial Intelligence in Spectroscopy.
Research Experience
  • Builds large language models and generative AI systems that reason reliably in complex scientific domains. Current research focuses on world modeling, where models learn and are constrained by rules that mirror how the world works, and on self-evolving LMs that critique, verify, and iteratively improve themselves via tools and interaction loops. Methodologically, this involves RL post-training and alignment, uncertainty calibration, and agentic tool use; on the application side, the emphasis is on AI for Science, especially chemistry, including literature-grounded extraction, retrosynthesis planning, and data curation.
Education
  • Pursuing a Ph.D. at the University of Notre Dame under the supervision of Prof. Xiangliang Zhang, with a focus on AI applications in science, particularly chemistry.
Background
  • Currently a fourth-year Ph.D. student in the MINE Lab at the University of Notre Dame, advised by Prof. Xiangliang Zhang. Research interests include building LLM-based systems that reason about science, check their own outputs, and improve via tools. His work initially focused on AI for Science (chemistry) and now centers on world modeling for LMs and reflective loops, aiming to bridge scientific reasoning and reflective agent design.
Miscellany
  • Will join Amazon Web Services (AWS) as an Applied Scientist Intern in Summer 2025.