Published multiple papers on large language models across various fields, including chemistry, recommender systems, and data science, such as 'Large Language Model based Multi-Agents: A Survey of Progress and Challenges' at IJCAI 2024 and 'What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks' at NeurIPS 2023.
Research Experience
Currently a third-year PhD student at the University of Notre Dame; earlier research experience is not detailed.
Education
PhD in Computer Science and Engineering from the University of Notre Dame, advised by Prof. Xiangliang Zhang, starting in Spring 2023; M.S. in Computer Science from King Abdullah University of Science and Technology (KAUST); B.Eng. in Software Engineering from Xidian University.
Background
Research interests: Developing intelligent systems with wider and deeper reasoning abilities. For wider reasoning, this includes model-level scaling (Mixture-of-Experts, multi-agents) for divergent thinking and collective intelligence, and data-level scaling (test-time training). For deeper reasoning, it explores model-environment/tool scaling (multi-turn RL) for long-horizon reasoning, and scaling of thinking paradigms (inductive/abductive/deductive). Practical applications encompass LLMs for data science (code generation, hyperparameter optimization, etc.), recommender systems, and scientific domains.