Xinran Zhao

Google Scholar ID: iX71amEAAAAJ
PhD Student, Carnegie Mellon University
Natural Language Processing · Machine Learning
Citations & Impact
  • Citations (all-time): 2,382
  • H-index: 12
  • i10-index: 13
  • Publications: 20
  • Co-authors: 52
Resume (English only)
Academic Achievements
  • 1. Improving Large Language Model Planning with Action Sequence Similarity (ICLR 2025).
  • 2. SPHERE: An Evaluation Card for Human-AI Systems (Findings of ACL 2025).
  • 3. cAST: Enhancing Code Retrieval-Augmented Generation with Structural Chunking via Abstract Syntax Tree (Findings of EMNLP 2025).
  • 4. Strategic Planning and Rationalizing on Trees Make LLMs Better Debaters (In Submission).
  • 5. MoR: Better Handling Diverse Queries with a Mixture of Sparse, Dense, and Human Retrievers (To appear in Proceedings of EMNLP 2025).
  • 6. Revela: Dense Retriever Learning via Language Modeling (In Submission).
  • 7. Pivot-ICL: Adaptive Exemplar Selection for In-Context Learning (In Submission).
  • 8. The Ramon Llull's Thinking Machine for Automated Ideation (To appear in LM4Sci@COLM 2025).
  • 9. Dense X Retrieval: What Retrieval Granularity Should We Use? (EMNLP 2024).
  • 10. MixGR: Enhancing Retriever Generalization for Scientific Domain through Complementary Granularity (EMNLP 2024).
  • 11. HiMemFormer: Hierarchical Memory-Aware Transformer for Multi-Agent Action Anticipation (Video-Language Models @ NeurIPS 2024).
  • 12. Beyond Relevance: Evaluate and Improve Retrievers on Perspective Awareness (In Submission).
Research Experience
  • 1. Spent a wonderful summer 2024 at Google DeepMind as a student researcher with Dr. Azade Nova and Dr. Hanie Sedghi.
  • 2. Mentors also include Dr. Shikhar Murty, Dr. Hongming Zhang, and Dr. Esin Durmus.
Education
  • 1. PhD student at CMU LTI, advised by Prof. Sherry Wu.
  • 2. MSCS at Stanford University, where I worked with Prof. Christopher Manning.
  • 3. Bachelor's at HKUST, with an exchange at Cornell; worked with Prof. Yangqiu Song, Prof. Dit-Yan Yeung, and Prof. Claire Cardie.
Background
  • Research interests include few-shot learning, commonsense reasoning, coreference resolution, knowledge graphs, and argument mining. The goal is to build efficient, effective, and faithful systems that tackle the ambiguity of natural language.
Miscellany
  • I love discussing new ideas related to, or beyond, these topics. Feel free to email me to start a chat.