John Thickstun

Google Scholar ID: RkuzIZMAAAAJ
Assistant Professor, Cornell University
Machine Learning · Generative Models · Music Technology · Natural Language Processing
Citations & Impact
All-time
Citations: 2,700
H-index: 15
i10-index: 17
Publications: 20
Co-authors: 19
Academic Achievements
  • Published the paper "Robust distortion-free watermarks for language models" in TMLR
  • Developed the AI music creation tool Aria
  • Wrote blog posts on co-composing music with the Anticipatory Music Transformer
  • Responded to the NSF's Request for Information on the White House's development of an AI Action Plan
Research Experience
  • Assistant Professor at Cornell University (starting Fall 2024)
  • Previously a Postdoctoral Scholar at Stanford University
  • Led research projects including the Anticipatory Music Transformer and watermarking for language models
Education
  • Postdoctoral Scholar at Stanford University, advised by Percy Liang
  • Ph.D., Allen School of Computer Science & Engineering, University of Washington, co-advised by Sham Kakade and Zaid Harchaoui
  • Undergraduate degree in Applied Mathematics, Brown University, advised by Eugene Charniak and Björn Sandstede
Background
  • His research interests include machine learning and generative models. He focuses on methods for controlling model behavior, both from the perspective of a user who wants to accomplish concrete tasks with a model and from the perspective of a model provider or policymaker who wants to broadly regulate a model's outputs. He is also interested in applications of generative models beyond the standard text and image modalities, including music technologies.
Miscellany
  • Has a strong interest in music technology and has developed related tools and techniques