
Jingfeng Yang

Google Scholar ID: hysBvrwAAAAJ
Affiliation: xAI
Research interests: Large Language Models, Language Agents, Alignment, AI Safety, Natural Language Processing
Citations & Impact (all-time)
  • Citations: 2,208
  • H-index: 17
  • i10-index: 20
  • Publications: 20
  • Co-authors: 23
Publications
  20 items (full list on Google Scholar)
Resume
Academic Achievements
  • Published "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond" in TKDD 2024.
  • Co-authored "SEQZERO: Few-shot Compositional Semantic Parsing with Sequential Prompts and Zero-shot Models", published in Findings of NAACL 2022.
Research Experience
  • Conducted research at Google and Microsoft.
  • Currently a founding member and scientist at Amazon Generative Foundational AI, building LLMs from scratch (e.g., Rufus). Main areas of work: 1) pretraining (data, infrastructure, scaling laws); 2) post-training (instruction tuning, human and AI preference learning); 3) evaluation; 4) language agents (tool use, planning and reasoning, long-context handling); 5) alignment and AI safety.
Education
  • Earned Computer Science degrees from Georgia Tech and Peking University; advised by Prof. Diyi Yang, now at Stanford University.
Background
  • Research Interests: Building and applying large language models (LLMs), including pretraining, post-training, evaluation, and language agents. Areas of expertise: Natural Language Processing, Machine Learning, Multi-modal Deep Learning.
Miscellany
  • Blog topics include LLM capabilities vs. alignment, AI safety, and the reproduction and use of GPT-3/ChatGPT.