
Alisa Liu

Google Scholar ID: 3-lTFAwAAAAJ
University of Washington
natural language processing · artificial intelligence
Citations & Impact
  • Citations (all-time): 5,067
  • H-index: 14
  • i10-index: 15
  • Publications: 20
  • Co-authors: 5
Academic Achievements
  • Published 'Broken Tokens? Your Language Model Can Secretly Handle Non-Canonical Tokenizations' at NeurIPS 2025 (Spotlight)
  • Published 'SuperBPE: Space Travel for Language Models' at COLM 2025
  • Published 'Tuning Language Models by Proxy' at COLM 2024 (Spotlight, top 7%)
  • Published 'We're Afraid Language Models Aren't Modeling Ambiguity' at EMNLP 2023
  • Co-authored 'Self-Instruct: Aligning Language Models with Self-Generated Instructions' at ACL 2023
  • Co-authored 'Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts' at ACL 2023
  • First-authored 'WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation' at EMNLP Findings 2022
  • Multiple publications at top venues including NAACL 2025, ICML 2024, TMLR 2023, ACL Findings 2025, and NeurIPS 2024
Background
  • Final-year PhD student in Computer Science at the University of Washington
  • Research focuses on natural language processing, particularly tokenization, decoding-time algorithms, and data creation
  • Advised by Yejin Choi and Noah Smith
  • Supported by the NSF Graduate Research Fellowship and OpenAI SuperAlignment Fellowship
  • On the job market for 2026