Jay-Yoon Lee
Google Scholar ID: _USiaqwAAAAJ
Seoul National University
Machine Learning · Artificial Intelligence · Knowledge Injection · Structured Prediction
Citations & Impact (all-time)
  • Citations: 642
  • h-index: 12
  • i10-index: 14
  • Publications: 20
  • Co-authors: 11
Resume (English only)
Academic Achievements
  • 1 paper accepted at EMNLP 2025!
  • 1 paper accepted at ICCV 2025 workshop!
  • 2 papers accepted at ACL 2025 (1 Oral)!
  • 1 paper accepted at ICLR 2025!
  • 1 paper accepted at COLING 2025 (Oral)!
  • 4 papers accepted at EMNLP 2024 (3 EMNLP, 1 TACL)!
  • 3 papers accepted at ACL 2024 workshop!
  • Won the Best Area Chair award for the 'Machine Learning for NLP track' at EMNLP 2023!
  • Our paper 'Machine Reading Comprehension using Case-based Reasoning' got accepted at EMNLP 2023!
Research Experience
  • Sep 2022 – present, Assistant Professor, Graduate School of Data Science, Seoul National University, Seoul, South Korea
  • Jul 2020 – Jul 2022, Postdoctoral Associate under Professor Andrew McCallum, University of Massachusetts, Amherst, MA
  • Oct 2015 – Jul 2020, Research Assistant under Professor Jaime Carbonell, Carnegie Mellon University, Pittsburgh, PA
  • Jun 2012 – Oct 2015, Research Assistant under Professor Christos Faloutsos, Carnegie Mellon University, Pittsburgh, PA
  • Oct 2019 – Jan 2020, Research Intern, Language & Speech, Google AI, New York, NY
  • Jun 2019 – Aug 2019, Research Intern, Deep Learning Group, Microsoft Research, Redmond, WA
  • Jun 2017 – Aug 2017, Research Intern, Information and Data Sciences Group, MSR & Bing, Redmond, WA
  • Jun 2016 – Aug 2016, Research Intern, IRML, Oracle Labs, Boston, MA
  • May 2015 – Aug 2015, Research Intern, Yahoo! Labs, Sunnyvale, CA
  • Sep 2009 – Jun 2011, Researcher, ZEROIN Corporation, Seoul, Korea
  • Jul 2008 – Aug 2009, Associate, NICE Pricing Services, Inc., Seoul, Korea
Background
  • The goal of my research is to inject knowledge and constraints into neural models, primarily for natural language processing (NLP) tasks. I am broadly interested in structured prediction, multi-task learning, logical reasoning, and better representation learning for these topics.