Nate Gruver
Google Scholar ID: R5QNdhcAAAAJ
New York University
Deep Learning · Generative Models · AI for Science
Citations & Impact (all-time)
  • Citations: 1,631
  • H-index: 12
  • i10-index: 12
  • Publications: 14
  • Co-authors: 12
Academic Achievements
  • Published multiple papers on topics including:
    - Understanding the relationship between large-scale pretraining and inductive biases
    - Generative modeling for protein and materials design
    - Combining generative models with uncertainty estimates
  • Notable publications:
    - Large Language Models Must Be Taught to Know What They Don't Know (NeurIPS 2024)
    - Fine-Tuned Language Models Generate Stable Inorganic Materials as Text (ICLR 2024)
    - Large Language Models Are Zero-Shot Time Series Forecasters (NeurIPS 2023)
    - Protein Design with Guided Discrete Diffusion (NeurIPS 2023, Spotlight)
    - The Lie Derivative for Measuring Learned Equivariance (ICLR 2023, Oral)
    - On Feature Learning in the Presence of Spurious Correlations (NeurIPS 2022)
    - Accelerating Bayesian Optimization for Biological Sequence Design with Denoising Autoencoders (ICML 2022, short talk)
    - Deconstructing the Inductive Biases of Hamiltonian Neural Networks (ICLR 2022, Spotlight)
    - Effective Surrogate Models for Protein Design with Bayesian Optimization (ICML Workshop on Computational Biology 2021)
    - Epistemic Uncertainty in Learning Chaotic Dynamical Systems (ICML Uncertainty in Deep Learning Workshop 2021)
    - Disagreement-Regularized Imitation of Complex Multi-Agent Interactions (NeurIPS Machine Learning for Autonomous Driving Workshop 2020)
    - Multi-agent Adversarial Inverse Reinforcement Learning with Latent Variables (AAMAS 2020)
    - Online Stochastic Planning for Multimodal Sensing and Navigation under Uncertainty (ICAPS 2020)
Research Experience
  • Internships at FAIR (generative modeling of crystals and proteins), Waymo (driver behavior modeling), and Google Cloud (applying ML to kernel virtual machines).
Education
  • PhD from NYU Courant, advised by Andrew Gordon Wilson and working closely with Kyunghyun Cho.
  • BS/MS in computer science from Stanford University, where he worked with Stefano Ermon, Mykel Kochenderfer, and Chris Piech.
Background
  • Machine learning researcher focusing on generative modeling and scientific discovery.