Zhenting Qi
Google Scholar ID: WZ00HCUAAAAJ
Harvard University
Research interests: Natural Language Processing, Deep Learning, Machine Learning
Citations & Impact (all-time)
  • Citations: 723
  • h-index: 13
  • i10-index: 14
  • Publications: 20
  • Co-authors: 33
Academic Achievements
  • Paper 'EvoLM: In Search of Lost Language Model Training Dynamics' accepted to NeurIPS 2025 (oral).
  • Paper 'Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search' accepted to ICML 2025.
  • Papers 'Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers', 'Quantifying Generalization Complexity for Large Language Models', and 'Follow My Instruction and Spill the Beans: Scalable Data Extraction from Retrieval-Augmented Generation Systems' accepted to ICLR 2025.
Research Experience
  • Worked closely with distinguished researchers including (the late) Prof. Dragomir R. Radev at Yale, Prof. Volodymyr Kindratenko at UIUC, Dr. Li Lyna Zhang at Microsoft Research Asia, Prof. Chuang Gan at MIT-IBM Watson AI Lab, and Prof. James Glass at MIT.
Education
  • Master's degree in Computational Science and Engineering from Harvard; dual bachelor’s degrees in Computer Engineering from UIUC and ZJU (highest honors); recipient of Harvard SEAS Prize Fellowship.
Background
  • First-year Computer Science Ph.D. student at Harvard University, co-advised by Prof. Yilun Du and Prof. Hima Lakkaraju. Research focuses on developing intelligent and reliable AI systems that benefit human society: understanding and enhancing reasoning capabilities in foundation models, building AI systems that generalize effectively to out-of-distribution (OOD) scenarios, training (multi-)agents for compositional reasoning tasks, improving the interpretability, controllability, and robustness of foundation models, and designing scalable methods that ensure reliability while advancing capabilities.
Miscellany
  • Will be joining Google DeepMind (Mountain View office) as a Student Researcher, working on language model post-training.