Kezhi Kong
Scholar

Google Scholar ID: MG46jrMAAAAJ
NVIDIA
Machine Learning
Citations & Impact (All-time)
  • Citations: 1,176
  • H-index: 12
  • i10-index: 13
  • Publications: 19
  • Co-authors: 16
Academic Achievements
  • OpenTab: Advancing Large Language Models as Open-domain Table Reasoners, ICLR, 2024
  • On the Reliability of Watermarks for Large Language Models, ICLR, 2024
  • GOAT: A Global Transformer on Large-scale Graphs, ICML, 2023
  • Robust Optimization as Data Augmentation for Large-scale Graphs, CVPR, 2022
  • VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization, NeurIPS, 2021
  • A Closer Look at Distribution Shifts and Out-of-Distribution Generalization on Graphs, DistShift Workshop @ NeurIPS (Spotlight), 2021
  • GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training, NeurIPS, 2021
  • Data Augmentation for Meta-Learning, ICML, 2021
  • SHOT-VAE: Semi-supervised Deep Generative Models With Label-aware ELBO Approximations, AAAI, 2021
Research Experience
  • Research Scientist at NVIDIA, Foundation Model Team.
Education
  • PhD: Department of Computer Science, University of Maryland, College Park. Advisor: Prof. Tom Goldstein.
  • Bachelor's: Zhejiang University. Advisor: Prof. Wei Chen.
Background
  • Research Interests: Machine Learning, Large Language Models, Graph Representation Learning, Trustworthy ML. Currently a Research Scientist at NVIDIA on the foundation model team under Applied Deep Learning Research (ADLR).