Weiqi Wang

Google Scholar ID: ZKgZ7jEAAAAJ
Final-Year PhD @HKUST | Intern @Amazon
Large Language Models · Reinforcement Learning · Reasoning · Planning · Benchmarking
Citations & Impact (all-time)
  • Citations: 706
  • H-index: 16
  • i10-index: 20
  • Publications: 20
  • Co-authors: 86
Resume (English only)
Academic Achievements
  • EMNLP 2024 Outstanding Paper Award (2024)
  • Hong Kong PhD Fellowship (HKPFS, 2022–2026)
  • HKUST RedBird PhD Scholarship (2022)
  • HKUST RedBird Academic Excellence Award for Continuing PhD Students (2023–2024, 2024–2025)
  • Dean’s List, School of Engineering, HKUST (Fall 2018, Fall 2019, Fall 2020, Spring 2022)
  • University’s Scholarship Scheme for Continuing Undergraduate Students (2019–2022)
  • Area Chair for top-tier conferences: ACL Rolling Review (2024–present), ACL (2024, 2025), EMNLP (2024, 2025), COLING (2025), NAACL (2025), COLM (2025), ICML (2025)
  • Reviewer for ACL, EMNLP, NAACL, EACL, KDD, NeurIPS, ICLR, AACL, etc.
  • Volunteer at IJCAI-2023
Background
  • Final-year Ph.D. student in Computer Science and Engineering at The Hong Kong University of Science and Technology, supervised by Professor Yangqiu Song
  • Currently an Applied Scientist Intern at Amazon Stores Foundational AI in Palo Alto, working with Dr. Xin Liu and Dr. Qingyu Yin
  • Previously a visiting Ph.D. student at Johns Hopkins University’s Center for Language and Speech Processing, supervised by Prof. Daniel Khashabi
  • Former Applied Scientist Intern at Amazon Search Experience Science in Palo Alto, collaborating with Dr. Limeng Cui, Dr. Xin Liu, and Dr. Chen Luo
  • Research interests center on large language models (LLMs), including:
      – Data-efficient RL training infrastructure for LLMs on math and agentic tasks
      – Unlocking creative and generalizable System II reasoning in LLMs via conceptualization and metaphysical reasoning
      – Human behavior understanding and intention modeling in e-commerce (FolkScope, IntentionQA, MIND) and social media (MIKO)
      – Scientific knowledge exploration using LLMs (arXiv2Table, Science Hierarchography, ClaimCheck)