Spotlight and oral presentations at top venues such as ICLR and ICML Workshops; several NeurIPS competition awards and leadership roles; published papers including 'Speculative Decoding with Multiple Drafters' and 'Federated Learning with Noisy Labels.'
Research Experience
Research Scientist at LG AI Research, focusing on thinking and reasoning strategies for LLMs; PhD Intern at Google Research (2023), Dynamo AI (2023), the Korean National Institute of Meteorological Sciences (2022), and Qualcomm AI (2021).
Education
PhD - KAIST AI, Advisor: Prof. Se-Young Yun; BS - KAIST Mathematical Science (Minor in Intellectual Property).
Background
Research Interests: Thinking and reasoning strategies for large language models (LLMs), including scalable test-time inference and model behavior optimization. Professional Field: AI, Optimization, and Statistical Inference.