Arthur Jacot
Google Scholar ID: G6OhFawAAAAJ
Assistant Professor, Courant Institute of Mathematical Sciences, NYU
Deep Learning
Citations & Impact (all-time)
  • Citations: 5,478
  • H-index: 14
  • i10-index: 14
  • Publications: 20
  • Co-authors: 13
Academic Achievements
  • Published multiple papers at top-tier conferences including ICLR, NeurIPS, and ICML
  • 2025: 'Wide Neural Networks Trained with Weight Decay Provably Exhibit Neural Collapse' (ICLR oral)
  • 2025: 'Hamiltonian Mechanics of Feature Learning: Bottleneck Structure in Leaky ResNets' (CPAL oral)
  • 2023: 'Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions' (ICLR spotlight)
  • Research focuses on feature learning, implicit bias, geometry of loss landscapes, and training dynamics in deep neural networks
Background
  • Assistant Professor at the Courant Institute of Mathematical Sciences, NYU
  • Aims to develop new mathematical concepts and tools to describe the training dynamics of Deep Neural Networks
  • Seeks to build a Theory of Deep Learning that transforms how AI models are trained and developed
  • Currently most excited about showing that DNNs implement a computational version of Occam's razor: finding the fastest algorithm or circuit that fits the training data
  • Also interested in feature learning, the emergence of low-dimensional representations under weight decay, and identifying distinct training regimes (e.g., the NTK regime vs. the active regime)