DongGeon Lee
Google Scholar ID: MQKVYV8AAAAJ
POSTECH
Natural Language Processing · Large Language Models · Responsible AI · AI Safety
Citations & Impact (All-time)
  • Citations: 9
  • H-index: 1
  • i10-index: 0
  • Publications: 14
  • Co-authors: 11
Resume (English only)
Academic Achievements
  • Paper 'COMPASS: A Framework for Evaluating Organization-Specific Policy Alignment in LLMs' under review
  • Paper 'Are Vision-Language Models Safe in the Wild? A Meme-Based Benchmark Study' accepted at EMNLP 2025 Main
  • Paper 'When Good Sounds Go Adversarial: Jailbreaking Audio-Language Models with Benign Inputs' preprint
  • Paper 'REFIND at SemEval-2025 Task 3: Retrieval-Augmented Factuality Hallucination Detection in Large Language Models' accepted at SemEval @ ACL 2025
  • Paper 'Everyday Physics in Korean Contexts: A Culturally Grounded Physical Reasoning Benchmark' accepted at MRL @ EMNLP 2025
  • Paper 'Global PIQA: Evaluating Physical Commonsense Reasoning Across 100+ Languages and Cultures' preprint
  • Paper 'Synthesizing a Korean-centric Math Corpus to Enhance Math Problem-Solving Ability in Korean Large Language Models' accepted at HCLT 2025 (Domestic)
  • Paper 'Typed-RAG: Type-Aware Decomposition of Non-Factoid Questions for Retrieval-Augmented Generation' accepted at XLLM @ ACL 2025 | NAACL 2025 SRW (Non-Archival)
  • Won the Excellent Paper Award at HCLT 2025
  • Won the Excellent Paper Award at HCLT 2024
  • Won the gold prize in the Korean AI Language Proficiency Challenge held by NIKL
  • Received the Top Engineering Student Award from Inha University
Research Experience
  • M.S. Student, Graduate School of Artificial Intelligence, POSTECH, Feb. 2024 - Present
  • Research Intern, KT, Winter 2025
Education
  • M.S. in Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Advisor: Prof. Hwanjo Yu, Feb. 2024 - Present
  • B.S. in Information & Communication Engineering, Inha University, Mar. 2018 - Feb. 2024
Background
  • An AI safety researcher focusing on data-centric NLP and LLM safety, including evaluations, guardrails, and automated red teaming. I aim to build safer, more reliable AI with data and LLMs.
Miscellany
  • Open to collaborations on responsible AI across academia and industry