Junlong Li
Google Scholar ID: UX7TpSYAAAAJ
Shanghai Jiao Tong University
Natural Language Processing
Citations & Impact (all-time)
  • Citations: 7,088
  • H-index: 15
  • i10-index: 16
  • Publications: 16
  • Co-authors: 0
Resume
Academic Achievements
  • Publications:
    - The Tool Decathlon: Benchmarking Language Agents for Diverse, Realistic, and Long-Horizon Task Execution (Preprint, 2025)
    - CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction (ICML 2025 Oral)
    - Diving into Self-Evolving Training for Multimodal Reasoning (ICML 2025)
    - Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale (ICML 2025)
    - Dissecting Human and LLM Preferences (ACL 2024)
    - Reformatted Alignment (EMNLP 2024, Findings)
    - Extending LLMs' Context Window with 100 Samples (Preprint, 2024)
    - The Critique of Critique (ACL 2024, Findings)
    - Generative Judge for Evaluating Alignment (ICLR 2024)
    - Generative AI for Math: Abel (Preprint, 2023)
    - Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning (EMNLP 2023, Findings)
    - Self-Prompting Large Language Models for Zero-Shot Open-Domain QA (NAACL 2024)
    - Dialogue-adaptive Language Model Pre-training from Quality Estimation (Neurocomputing, 2022)
    - DiT: Self-supervised Pre-training for Document Image Transformer (ACM Multimedia 2022)
    - MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding (ACL 2022)
    - Multi-turn Dialogue Reading Comprehension with Pivot Turns and Knowledge (TASLP, 2021)
Research Experience
  • Research Intern at Microsoft Research Asia (NLC group), advised by Dr. Lei Cui; worked on Document AI topics including webpage understanding and document image foundation models.
  • May 2023 to Feb 2024: worked closely with Prof. Pengfei Liu at GAIR on Large Language Models (LLMs), focusing primarily on LLM evaluation and alignment.
  • Before entering HKUST, Research Intern on the DeepSeek LLM Alignment Team, conducting research on general reasoning under the supervision of Dr. Yu Wu.
Education
  • 2025.09 - 2028.06 (expected), Ph.D. in Computer Science & Engineering, HKUST, Advisor: Prof. Junxian He
  • 2022, B.S. in Computer Science, IEEE Honor Class, Shanghai Jiao Tong University, Advisor: Prof. Hai Zhao
  • Master's degree in Computer Science, Shanghai Jiao Tong University, Advisor: Prof. Hai Zhao
Background
  • Research Interests: Document AI, Evaluation and Alignment of Large Language Models
  • Field: Computer Science
  • Introduction: Currently a Ph.D. student in Computer Science & Engineering at HKUST, supervised by Prof. Junxian He. Previously a research intern at Microsoft Research Asia (NLC group), working on webpage understanding and document image foundation models.
Co-authors: 0 (list not available)