Vaishnavi Shrivastava


Google Scholar ID: N0nX2VsAAAAJ
MS CS, Stanford University
Natural Language Processing, Machine Learning, Deep Learning
Citations & Impact (all-time)
  • Citations: 342
  • H-index: 7
  • i10-index: 6
  • Publications: 10
  • Co-authors: 6
Academic Achievements
  • Paper: 'Llamas Know What GPTs Don't Show: Surrogate Models for Confidence Estimation', arXiv, 2023
  • Paper: 'Benchmarking and Improving Generator-Validator Consistency of Language Models', ICLR, 2024
  • Paper: 'Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs', ICLR, 2024
  • Paper: 'UserIdentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis', NAACL, 2022
  • Paper: 'Exploring Low-Cost Transformer Model Compression for Large-Scale Commercial Reply Suggestions', arXiv, 2021
Research Experience
  • Applied Scientist at Microsoft, developing parameter-efficient NLP systems deployed to millions of users; Student Researcher at the Allen Institute for AI.
Education
  • Master's: Computer Science, Stanford University, advised by Professor Percy Liang; Bachelor's: Computer Science, Caltech.
Background
  • Research interests: building trustworthy large language models capable of robust reasoning, with a particular focus on teaching models to express their uncertainty, reason consistently, perform long-horizon planning, and continually adapt to real-world signals.
Miscellany
  • Contact: Email, CV, Google Scholar, Twitter, LinkedIn, Github