Soyeong Jeong

Google Scholar ID: 0wnquCEAAAAJ
Ph.D. student at KAIST
NLP · RAG · IR · LLMs
Citations & Impact (All-time)
  • Citations: 640
  • H-index: 12
  • i10-index: 12
  • Publications: 20
  • Co-authors: 19
Resume (English only)
Academic Achievements
  • “When Thoughts Meet Facts: Reusable Reasoning for Long-Context LMs” (First author, Under Review)
  • “Adaptive Multi-Agent Response Refinement in Conversational Systems” (First author, Under Review)
  • “UniversalRAG: Retrieval-Augmented Generation over Multiple Corpora with Diverse Modalities and Granularities” (Co-author, Under Review)
  • “PRISM: Fine-Grained Paper-to-Paper Retrieval with Multi-Aspect-Aware Query Optimization” (Co-author, Under Review)
  • “Database-Augmented Query Representation for Information Retrieval” (First author, EMNLP 2025, Oral)
  • “The RAG Paradox: A Black-Box Attack Exploiting Unintentional Vulnerabilities in Retrieval-Augmented Generation Systems” (Co-author, Findings of EMNLP 2025)
  • “CaMMT: Benchmarking Culturally Aware Multimodal Machine Translation” (Co-author, Findings of EMNLP 2025)
  • “Upcycling Candidate Tokens of Large Language Models for Query Expansion” (Co-author, CIKM 2025)
  • “VideoRAG: Retrieval-Augmented Generation over Video Corpus” (First author, Findings of ACL 2025)
  • “EXIT: Context-Aware Extractive Compression for Enhancing Retrieval-Augmented Generation” (Co-author, Findings of ACL 2025)
  • “Temporal Information Retrieval via Time-Specifier Model Merging” (Co-author, KnowFM @ ACL 2025)
  • “Unified Multi-Modal Interleaved Document Representation for Information Retrieval” (Co-author, VecDB @ ICML 2025)
  • “Lossless Acceleration of Large Language Models with Hierarchical Drafting based on Temporal Locality in Speculative Decoding” (Co-author, Findings of NAACL 2025)
Background
  • Ph.D. student at KAIST
  • Member of MLAI Lab, advised by Sung Ju Hwang
  • Main research interest: Retrieval-Augmented Generation (RAG) for open-domain language tasks
  • Also works on the interpretability of Large Language Models (LLMs) to enhance their real-world applicability
  • Broadly interested in natural language understanding