Presented three papers at ACL 2025: DREsS, CSRT, and CSCL
Two papers accepted at NeurIPS 2025: BenchHub (Efficient Reasoning) and Language Confusion in Code-Switching (LLM-Eval)
Paper 'Shared Heritage, Distinct Writing' accepted at IJCNLP-AACL 2025
Organizer of MELT Workshop at COLM 2025
First-authored paper 'Code-Switching Curriculum Learning for Multilingual Transfer in LLMs', published in Findings of ACL 2025; it introduces CSCL, a curriculum learning method inspired by human second-language acquisition that improves cross-lingual transfer in LLMs for Korean, Japanese, and Indonesian
Proposed Code-Switching Red-Teaming (CSRT) to jointly evaluate multilingual understanding and safety in LLMs
Research Experience
Visiting scholar at New York University (NYU) since September 2025 (6-month appointment)
Former lecturer at Boostcamp AI Tech, a program run by the NAVER Connect Foundation
Internships at:
- NAVER AI Lab
- Upstage
- KEPCO Research Institute
- CSIRO
Background
Researcher in machine learning (ML) and natural language processing (NLP)
Ultimate goal: reducing communication disparities across languages and cultures using large language models (LLMs)
Focuses on inclusive LLMs through:
- Advancing multilingual language modeling
- Developing resources and evaluation methods for low-resource languages
- Innovating LLM-driven education to support English as a Foreign Language (EFL) learning