Published two papers on enhancing instruction following at ACL 2025; developed the Enigmata and KORGym platforms for improving LLMs' puzzle-reasoning skills; contributed to projects such as Seed-Thinking-v1.5 and Doubao-1.5-pro-AS1-Preview; paper 'Think Thrice Before You Act: Progressive Thought Refinement in Large Language Models' accepted at ICLR 2025; created the CELLO benchmark for evaluating instruction following.
Research Experience
Research Intern at ByteDance Seed-LLM-Horizon from November 2024 to present, focusing on reasoning models and long chain-of-thought; Research Intern at StepFun Foundation Model Group from May 2024 to October 2024, working on LLM reasoning and generative reward models; Student Research Leader at Knowledge Works Lab, Fudan University, since March 2021, leading more than 10 undergraduate and graduate students on instruction following, LLM reasoning, and creative generation.
Education
PhD in CS, 2021-2026 (expected), Fudan University; B.S. in CS, 2017-2021, Fudan University.
Background
Currently a fourth-year PhD candidate at the School of Computer Science, Fudan University. Her research focuses on enhancing the fundamental reasoning and instruction-following capabilities of large language models (LLMs).