Publications
Published 'Prismatic Synthesis: Gradient-based Data Diversification Boosts Generalization in LLM Reasoning' at NeurIPS 2025 (Spotlight, 3.1% acceptance rate); published 'AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text' at ICLR 2025 (Oral, 1.8% acceptance rate); among others.
Research Experience
Research intern at NVIDIA (2024-2025): worked on reasoning (Retro-Search, Prismatic Synthesis) and pre-training data (Nemotron-H, Nemotron Nano 2); Visiting researcher at Allen Institute for AI (2022-2024): safety (WildGuard, WildJailbreak) and VLMs (Champagne); ML engineer at Hyperconnect (2019-2022; acquired by Match Group for $1.7B): social chatbots and product development.
Education
Seoul National University (2017-2024); Seoul Science High School (2014-2016)
Background
First-year Ph.D. student in Computer Science at Stanford, advised by Yejin Choi. Research focuses on improving language models to solve challenging scientific problems, particularly the reasoning and knowledge capabilities of LLMs; especially interested in data and training algorithms, with a preference for simple, scalable ideas.