Publications include 'When Format Changes Meaning: Investigating Semantic Inconsistency of Large Language Models' (EMNLP 2025 Findings), 'Neural ODE Transformers: Analyzing Internal Dynamics and Adaptive Fine-tuning' (ICLR 2025), 'Balanced Domain Randomization for Safe Reinforcement Learning' (Applied Sciences, 2024), and 'Impact of Co-occurrence on Factual Knowledge of Large Language Models' (EMNLP 2023 Findings), among others.
Research Experience
Conducts research at the Statistical Artificial Intelligence Lab (SAIL) at KAIST, focusing on natural language processing and explainable AI.
Education
Ph.D. candidate at the Graduate School of AI, KAIST, advised by Jaesik Choi and affiliated with the Statistical Artificial Intelligence Lab (SAIL).
Background
Research interests: natural language processing (NLP) and explainable AI (XAI), with a particular focus on how knowledge is encoded, processed, and used for reasoning in language models, and how it can be effectively extracted. Work spans two key directions: (1) analyzing the internal knowledge representations and mechanisms of language models, and (2) enhancing models by controlling or augmenting their knowledge.
Miscellany
Links to CV, Google Scholar, GitHub, and LinkedIn are available on the personal website.