Siheng Xiong
Google Scholar ID: PbNzCQoAAAAJ
Georgia Institute of Technology
Machine Learning · Natural Language Processing · Language Models · Knowledge Graphs
Citations & Impact (all-time)
  • Citations: 496
  • H-index: 12
  • i10-index: 14
  • Publications: 20
  • Co-authors: 21
Academic Achievements
  • Published several papers, including:
    - 'Large Language Models Towards Reasoning' (Preprint 2025)
    - 'DeepControl: Enhancing Research Agents via Process-Level Verification' (Preprint 2025)
    - 'Enhancing Long Chain-of-Thought Reasoning with Multi-Path Planning with Aggregation' (ACL 2025)
    - 'Deliberate Reasoning in Language Models as Structure-Aware Planning with an Accurate World Model' (NAACL 2025)
    - 'CausalEval: Towards Better Causal Reasoning in Language Models' (EMNLP 2024)
    - 'Can LLMs Reason in the Wild with Programs?' (ACL 2024)
    - 'Large Language Models Can Learn Temporal Reasoning' (ACL 2024)
Education
  • He is currently a Ph.D. candidate in Machine Learning at the Georgia Institute of Technology, advised by Prof. Faramarz Fekri. Prior to this, he earned his Bachelor’s degree from Xi’an Jiaotong University and his Master’s degree from Shanghai Jiao Tong University.
Background
  • His research interests include reasoning and planning with large language models and knowledge graphs, as well as efficient long-context language modeling. His current work focuses on advancing the reasoning capabilities of large language models.