Wenda Xu
Google Scholar ID: hUh7qCcAAAAJ
Research areas: LLM Evaluation, LLM Alignment
Citations & Impact (All-time)
  • Citations: 957
  • H-index: 12
  • i10-index: 15
  • Publications: 20
  • Co-authors: 0
Resume (English only)
Academic Achievements
  • Publications: 'Speculative Knowledge Distillation' accepted at ICLR 2025.
  • Awards: SEScore 1 & 2 and InstructScore recognized as the best unsupervised metrics at the WMT22 shared task.
Research Experience
  • Google Translate Research: Research Scientist, April 2025 - Present. Research on automatic benchmark construction, self-bias in LLM-generated benchmarks, and length bias in translation evaluation metrics.
  • Google Cloud AI Research: June 2024 - October 2024. Mentors: Chen-Yu Lee, Rishabh Agarwal. Worked on speculative knowledge distillation.
  • Google Translate Research: June 2023 - December 2023. Mentor: Markus Freitag. Developed an inference-time optimization technique to iteratively refine PaLM 2 outputs.
  • ByteDance AI Lab: June 2022 - October 2022. Mentor: Mingxuan Wang. Developed SESCORE2, a learned evaluation metric that requires no human labels.
Education
  • Ph.D. in Computer Science from the University of California, Santa Barbara. Advisors: Prof. William Wang and Prof. Lei Li.
Background
  • Research Interests: Improving large language models (LLMs) through rigorous evaluation and efficient post-training. Actively developing automated methods to assess model capabilities, including efficient data curation techniques for building challenge benchmarks. Designing unsupervised and explainable evaluation metrics. Background also includes developing efficient post-training techniques, from preference learning with BPO to knowledge distillation with Speculative KD.