Browse publications on Google Scholar
Resume (English only)
Academic Achievements
"Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting" accepted by ICLR 2025, achieving state-of-the-art performance in accuracy and efficiency for RAG.
"OFFICEBENCH: Benchmarking Language Agents across Multiple Applications for Office Automation" new paper alert!
"Debug like a Human: A Large Language Model Debugger via Verifying Runtime Execution Step-by-step" accepted by ACL 2024 Findings!
"Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding" accepted by ICLR 2024!
"A Study on Robustness and Reliability of Large Language Model Code Generation" received by AAAI 2024!
Research Experience
Conducted research at Amazon Foundation Models, Google Cloud AI, Google DeepMind, Google Research, Adobe Research, and Microsoft Research Asia.
Education
Ph.D. in Computer Science and Engineering at UC San Diego, advised by Prof. Jingbo Shang; B.S. in Computer Science from Peking University, advised by Prof. Xiaojun Wan.
Background
Research interests span several areas of natural language processing, including reasoning, information extraction, multimodal learning, and language modeling. Current work focuses on LLM post-training, aiming to equip LLMs with complex reasoning and planning abilities for knowledge-intensive queries, autonomous agents, and mathematical problem-solving. Earlier research focused on visually-rich document understanding.