Research Experience
Published papers at multiple international conferences on the security of large language models, data-driven security systems, and cybersecurity, including ACSAC 2025, USENIX Security 2025, NAACL 2025 Findings, and KDD 2025. Representative paper titles include 'MoEvil: Poisoning Expert to Compromise the Safety of Mixture-of-Experts LLMs' and 'Refusal Is Not an Option: Unlearning Safety Alignment of Large Language Models'.
Education
Ph.D. Candidate in Electrical Engineering at KAIST, Network & System Security (NSS) Lab, Advisor: Prof. Seungwon Shin
Background
Research interests include the safety and security of large language models (LLMs) and data-driven security systems. Proactively red-teams AI-driven systems to uncover vulnerabilities and develops defensive methodologies that ensure robust and safe deployment. Investigates security and safety challenges in emerging AI paradigms such as unlearning, agentic systems, and mixture-of-experts (MoE) architectures. Also applies AI to strengthen cybersecurity tasks such as illicit drug detection and credential stuffing risk prediction.
Miscellany
Contact information includes email, CV download link, Google Scholar profile, and LinkedIn profile.