Publications
'DISC: Dynamic Decomposition Improves LLM Inference Scaling' (NeurIPS 2025)
'Strategist: Learning Strategic Skills by LLMs via Bi-Level Tree Search' (ICLR 2025)
'Scattered Forest Search: Smarter Code Space Optimization Improves LLM Inference Scaling' (ICLR 2025)
Featured in the State of AI Report 2024.
Research Experience
Visiting Researcher at Caltech, Oct 2024 - Present. Advisor: Yisong Yue.
Research Intern at NEC Laboratories America, May 2024 - Present. Advisor: Wei Cheng.
Education
Ph.D. in Computer Science, Rensselaer Polytechnic Institute (RPI), Aug 2023 - Present. Advisors: Santiago Paternain and Ziniu Hu.
M.S. in Financial Mathematics, University of Chicago, Aug 2021 - Mar 2023. Collaborators: Haifeng Xu and Dacheng Xiu.
B.S. in Mathematics and Economics, Reed College, Aug 2017 - May 2021.
Background
Research Interests: Scalable reasoning and intelligent agents.
Work Focus: Scaling the reasoning abilities of large language models and LLM agents through search, inference scaling, and reinforcement learning.
Miscellany
Personal note: Trained as an economist before becoming an AI researcher.