Publications
Published multiple papers in the fields of NLP and ML, including 'DRAMA: Diverse Augmentation from Large Language Models to Smaller Dense Retrievers' (ACL 2025) and 'FLAME🔥: Factuality-Aware Alignment for Large Language Models' (NeurIPS 2024).
Research Experience
Worked as a Research Scientist at Meta's Fundamental AI Research (FAIR) lab. PhD research focused on learning deep representations for low-resource and zero-resource cross-lingual model transfer.
Education
PhD in Computer Science from Cornell University (2019), advised by Prof. Claire Cardie; completed undergraduate studies at Shanghai Jiao Tong University.
Background
Research interests: Natural Language Processing and Machine Learning, particularly the interplay between knowledge and language.