Sangkug Lym

Google Scholar ID: -gML-RQAAAAJ
NVIDIA
Citations & Impact (All-time)
  • Citations: 967
  • H-index: 13
  • i10-index: 14
  • Publications: 20
  • Co-authors: 3
Resume (English only)
Research Experience
  • Sr. Deep Learning Computer Architect, NVIDIA (Redmond, WA), Jan 2019–Present: Tech lead for LLM performance optimization; SW/HW co-design for DL performance; GPU system performance engineering.
  • Graduate Research Assistant, LPH Group, UT Austin (Aug 2015–Dec 2019): ML acceleration (algorithm, SW, scheduling, HW); high-performance energy-efficient memory systems; microarchitecture-level fault injection and fault tolerance analysis.
  • Research Intern, Microsoft AI & Advanced Architecture Group (May–Aug 2019): DL model performance analysis; accelerator architecture design space exploration.
  • Deep Learning Architecture Intern, NVIDIA (Santa Clara, CA, May–Aug 2018): DL workload analysis; GPU kernel analysis for fast training.
  • Research Intern, NVIDIA Research Architecture Group (Austin, TX, May–Aug 2017): DL workload analysis; GPU memory modeling and optimization for CNN training.
  • Research Intern, Hewlett Packard Labs Platform Architecture Group (May–Aug 2016): Persistent memory architecture; memory-centric computing; DRAM cache simulator design.
  • DRAM Design & Performance Evaluation Engineer, SK hynix (Icheon, South Korea), Apr 2012–Jul 2015: DDR4-Extension feature development; next-generation DRAM evaluation; represented SK hynix in JEDEC DDR4/DDR4-Extension standardization.
  • PCRAM Architecture & Circuit Design Engineer, SK hynix (Icheon, South Korea), Dec 2007–Apr 2012: PCRAM architecture and core optimization; data interface and layout design; wafer-level functionality and SLC/MLC transition analysis.