Published several papers, including 'Self-Training Large Language Models with Confident Reasoning' and 'Pessimistic Backward Policy for GFlowNets'; received honors and awards including the Postechian Fellowship (2024) and the Presidential Science Scholarship (2024).
Research Experience
Involved in multiple research projects, including self-training of large language models with confident reasoning and energy decomposition for partial inference in GFlowNets.
Ph.D. student at KAIST, focusing on generative models, including large language models (LLMs), Generative Flow Networks (GFlowNets), and diffusion probabilistic models (DPMs), for inferring diverse solutions. Also interested in machine learning for medical and biological domains.
Miscellany
Invited to give talks at venues including UMICH, ICLR, and Mila.