DNNs May Determine Major Properties of Their Outputs Early, with Timing Possibly Driven by Bias (2025)
Probabilistic Language-Image Pre-Training (2025)
Rotary Position Embedding for Vision Transformer (2024)
SeiT++: Masked Token Modeling Improves Storage-efficient Training (2024)
Similarity of neural architectures using adversarial attack transferability (2024)
Research Experience
Worked as a Research Scientist at NAVER AI Lab from 2022 to 2025.
Education
Received a Ph.D. and an M.S. in Integrated Technology from Yonsei University in 2022, supervised by Hyunjung Shim; also holds a B.S. in Integrated Technology from Yonsei University.
Background
Research interests center on understanding how deep neural networks perceive and process diverse visual concepts, with the aim of building structured visual representations for real-world applications. Specializes in text-to-image generative models and visual representation learning.