Proposed the first exact measurement of language model capacity, estimating that GPT-style models memorize roughly 3.6 bits per parameter; built a state-of-the-art text embedding model; developed the first method to perfectly invert text embeddings, and a follow-up that inverts LLM logits back to their prompts; created TextAttack, the first software package for adversarial attacks on NLP models, as an undergraduate in 2019 and 2020.
Research Experience
Part-time student researcher at Meta (FAIR) for roughly a third of his PhD; previously a Google Brain Resident.
Education
PhD from Cornell University (Cornell Tech campus), advised by Sasha Rush and Vitaly Shmatikov; undergraduate degree from the University of Virginia, where he specialized in NLP.
Background
Research interests include natural language processing (NLP) and machine learning.
Miscellany
Active on Twitter as jxmnop and on GitHub as jxmorris12; has appeared on the Latent Space and Odd Lots podcasts; writes a more formal blog on Substack.