Publications
Investigating Counterfactual Unfairness in LLMs towards Identities through Humor (Under Review)
Subtle Risks, Critical Failures: A Framework for Diagnosing Physical Safety of LLMs for Embodied Decision Making (EMNLP 2025 Main)
Multimodal UNcommonsense: From Odd to Ordinary and Ordinary to Odd (EMNLP 2025)
G1yphD3c0de: Towards Safer Language Models on Visually Perturbed Texts (COLM 2025)
Mind the Motions: Benchmarking Theory-of-Mind in Everyday Body Language (Preprint)
Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Commonsense Norms (EMNLP 2023)
Research Experience
Conducting research in multimodal learning and socially aware, human-centered AI at Yonsei University.
Education
M.S. in Artificial Intelligence from Yonsei University, advised by Youngjae Yu; B.S. in Economics and Applied Statistics from Yonsei University.
Background
A Master's student in Artificial Intelligence at Yonsei University, with research interests in multimodal learning, socially aware and human-centered AI, responsible AI, and pluralistic alignment. Aims to enhance AI systems' ability to understand individuals' unique characteristics, contributing to AI that not only understands people more fully but also supports them in their everyday lives.
Miscellany
Contact: Email, Google Scholar, GitHub, LinkedIn, X (formerly Twitter)