Training Proactive and Personalized LLM Agents (Preprint)
Scaling Long-Horizon LLM Agent via Context-Folding (Preprint)
Scaling LLM Multi-turn RL with End-to-end Summarization-based Context Management (Preprint)
ReasonRank: Empowering Passage Ranking with Strong Reasoning Ability (Preprint)
CoMind: Towards Community-Driven Agents for Machine Learning Engineering (MTI-LLM@NeurIPS 2025)
Enhancing Training Data Attribution with Representational Optimization (NeurIPS 2025 Spotlight)
FrontierCO: A Comprehensive Evaluation of Contemporary ML-Based Solvers for Combinatorial Optimization (Preprint)
CO-Bench: Benchmarking Language Model Agents in Algorithm Search for Combinatorial Optimization (Preprint)
CodePDE: An Inference Framework for LLM-driven PDE Solver Generation (Preprint)
ZeroGR: A Generalizable and Scalable Framework for Zero-Shot Generative Retrieval (Preprint)
Direct Retrieval-augmented Optimization: Synergizing Knowledge Selection and Language Models (Preprint)
Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning (NeurIPS 2025)
MAIR: A Massive Benchmark for Evaluating Instructed Retrieval (EMNLP 2024)
TourRank: Utilizing Large Language Models for Documents Ranking with a Tournament-Inspired Strategy (WWW 2025)
MEFT: Memory-Efficient Fine-Tuning through Sparse Adapter (ACL 2024)
Generate-then-Ground in Retrieval-Augmented Generation for Multi-hop Question Answering (ACL 2024)
Enhanced Generative Recommendation via Content and Collaboration Integration (CIKM 2024)
Improving the Robustness of Large Language Models via Consistency Alignment (LREC-COLING 2024)
How Large Language Models Encode Context Knowledge? A Layer-Wise Probing Study (LREC-COLING 2024)
Research Experience
Conducting PhD research at Carnegie Mellon University on long-horizon LLM agents, scientific reasoning, and information retrieval.
Education
PhD Student, Language Technologies Institute, Carnegie Mellon University (Advisor: Yiming Yang)
M.E. and B.E., Shandong University (Advisor: Zhaochun Ren)
Background
Research Interests: long-horizon LLM agents, scientific reasoning, and information retrieval. Field: Natural Language Processing.
Miscellany
Contact: Email, Twitter, LinkedIn, Google Scholar, GitHub