Publications
“When Thoughts Meet Facts: Reusable Reasoning for Long-Context LMs” (First author, Under Review)
“Adaptive Multi-Agent Response Refinement in Conversational Systems” (First author, Under Review)
“UniversalRAG: Retrieval-Augmented Generation over Multiple Corpora with Diverse Modalities and Granularities” (Co-author, Under Review)
“PRISM: Fine-Grained Paper-to-Paper Retrieval with Multi-Aspect-Aware Query Optimization” (Co-author, Under Review)
“Database-Augmented Query Representation for Information Retrieval” (First author, EMNLP 2025, Oral)
“The RAG Paradox: A Black-Box Attack Exploiting Unintentional Vulnerabilities in Retrieval-Augmented Generation Systems” (Co-author, Findings of EMNLP 2025)
“Upcycling Candidate Tokens of Large Language Models for Query Expansion” (Co-author, CIKM 2025)
“VideoRAG: Retrieval-Augmented Generation over Video Corpus” (First author, Findings of ACL 2025)
“EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation” (Co-author, Findings of ACL 2025)
“Temporal Information Retrieval via Time-Specifier Model Merging” (Co-author, KnowFM @ ACL 2025)
“Unified Multi-Modal Interleaved Document Representation for Information Retrieval” (Co-author, VecDB @ ICML 2025)
“Lossless Acceleration of Large Language Models with Hierarchical Drafting based on Temporal Locality in Speculative Decoding” (Co-author, Findings of NAACL 2025)
Background
Ph.D. student at KAIST
Member of MLAI Lab, advised by Sung Ju Hwang
Main research interests: Retrieval-Augmented Generation (RAG) for open-domain language tasks
Interpretability of Large Language Models (LLMs) to enhance their real-world applicability
Broad interest in natural language understanding