Weixin Liang

Google Scholar ID: 7z9P1jYAAAAJ
Stanford University
Research interests: Large Language Models (LLMs), societal impact of LLMs, multi-modal LLMs, Mixture-of-Experts
Citations & Impact (all-time)
  • Citations: 3,849
  • H-index: 24
  • i10-index: 32
  • Publications: 20
  • Co-authors: 18
Academic Achievements
  • Paper 'Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews' accepted as Oral (top 5%) at ICML 2024
  • Runner-up for Best Presentation Award at ICSSI 2024 (International Conference on the Science of Science and Innovation), held at the National Academy of Sciences
  • Paper 'Mapping the Increasing Use of LLMs in Scientific Papers' published at COLM 2024
  • arXiv preprint 'The Widespread Adoption of Large Language Model-Assisted Writing Across Society' (2025) provides the first large-scale measurement of LLM-assisted writing in publicly available texts
  • Research featured in over 300 global media outlets including Nature, The New York Times, Scientific American, The Guardian, and Fortune
Background
  • Ph.D. student in Computer Science at Stanford University
  • Member of the Stanford Artificial Intelligence Laboratory (SAIL)
  • Advised by Prof. James Zou
  • Research focuses on the societal impact and responsible use of Large Language Models (LLMs)
  • Developed novel methodologies to measure and quantify the adoption of LLMs across UN communications, corporate press releases, job postings, consumer complaints, academic publications, and peer reviews