Qianyu He
Scholar

Google Scholar ID: X4l87TgAAAAJ
Fudan University
Large Language Models · Reasoning · Instruction Following · Creative Generation
Citations & Impact
All-time
  • Citations: 614
  • H-index: 10
  • i10-index: 10
  • Publications: 20
  • Co-authors: 0
Resume (English only)
Academic Achievements
  • Published two papers on enhancing instruction following at ACL 2025.
  • Developed the Enigmata and KORGym platforms for improving LLMs' puzzle-reasoning skills.
  • Contributed to projects such as Seed-Thinking-v1.5 and Doubao-1.5-pro-AS1-Preview.
  • Paper "Think Thrice Before You Act: Progressive Thought Refinement in Large Language Models" accepted at ICLR 2025.
  • Created the CELLO benchmark for instruction-following evaluation.
Research Experience
  • Research Intern, ByteDance Seed-LLM-Horizon, November 2024 to present: reasoning models, long chain-of-thought.
  • Research Intern, StepFun Foundation Model Group, May 2024 to October 2024: LLM reasoning, generative reward models.
  • Student Research Leader, Knowledge Works Lab, Fudan University, since March 2021: leading over 10 undergraduate and graduate students in instruction following, LLM reasoning, and creative generation.
Education
  • PhD in Computer Science, 2021-2026 (estimated), Fudan University; B.S. in Computer Science, 2017-2021, Fudan University.
Background
  • Currently a fourth-year PhD candidate at Fudan University's School of Computer Science. Her research focuses on enhancing the fundamental reasoning and instruction-following capabilities of large language models (LLMs).
Miscellany
  • Hobby: Dancing