Publications
Paper: 'UserIdentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis', NAACL, 2022
Paper: 'Exploring Low-Cost Transformer Model Compression for Large-Scale Commercial Reply Suggestions', arXiv, 2021
Research Experience
Applied Scientist at Microsoft, developing parameter-efficient NLP systems deployed to millions of users; Student Researcher at the Allen Institute for AI.
Education
Master's: Computer Science at Stanford University, advised by Professor Percy Liang; Bachelor's: Computer Science at Caltech.
Background
Research interests: Building trustworthy large language models capable of robust reasoning. I am particularly excited about teaching models to express their uncertainty, reason consistently, perform long-horizon planning, and continually adapt to real-world signals.
Miscellany
Contact: Email, CV, Google Scholar, Twitter, LinkedIn, GitHub