Jane Yu

Google Scholar ID: ev8Ilx0AAAAJ
OpenAI
Citations & Impact (all-time)
  • Citations: 6,329
  • h-index: 20
  • i10-index: 33
  • Publications: 20
  • Co-authors: 32
Academic Achievements
  • Toolformer: Language Models Can Teach Themselves to Use Tools, NeurIPS 2023 [Oral Presentation]
  • ROBBIE: Robust Bias Evaluation of Large Generative Language Models, EMNLP 2023
  • Active Retrieval Augmented Generation, EMNLP 2023
  • Active Learning Principles for In-Context Learning with Large Language Models, EMNLP Findings 2023
  • Augmented Language Models: a Survey, TMLR 2023
  • Atlas: Few-shot Learning with Retrieval Augmented Language Models, JMLR 2023
  • NormBank: A Knowledge Bank of Situational Social Norms, ACL 2023
  • TimelineQA: A Benchmark for Question Answering over Timelines, ACL Findings 2023
  • Learnings from Data Integration for Augmented Language Models
  • Improving Wikipedia Verifiability with AI, Nature Machine Intelligence 2023
  • Using Comments for Predicting the Affective Response to Social Media Posts, ACII 2023
  • Consequences of Conflicts in Online Conversations, ICWSM 2024 (Under Review)
  • Selective whole-genome amplification reveals population genetics of Leishmania braziliensis directly from patient skin biopsies
Research Experience
  • Works at FAIR (Meta), focusing on enhancing the reasoning capabilities of large language models.
Education
  • Completed her Ph.D. at UC Berkeley in 2019, advised by Professor Yun S. Song, with a focus on computational tools for immune repertoire characterization and primer set design. Received a Bachelor of Arts and Sciences in Computer Science and Chemistry from Cornell University in 2014.
Background
  • A researcher at FAIR (Meta), working on improving the reasoning capabilities of large language models. Previously, she was a Ph.D. student in the EECS department at the University of California, Berkeley.