Robert Joseph George
Scholar

Google Scholar ID: 5P1Uwy4AAAAJ
California Institute of Technology
Machine Learning · AI4Science · AI4Math
Citations & Impact
All-time
Citations: 95
h-index: 4
i10-index: 3
Publications: 8
Co-authors: 10
Resume (English only)
Academic Achievements
  • Developed Incremental Fourier Neural Operator (iFNO) for large-scale PDEs (TMLR 2024).
  • Created CoDA-NO: Pretrained Codomain Attention Neural Operators for Multiphysics PDEs, achieving SOTA (NeurIPS 2024).
  • Contributing to an open-source neural operator library (paper under review).
  • Led LeanAgent: Lifelong Learning for Formal Theorem Proving (ICLR 2025).
  • Developed LeanProgress: First reward model predicting proof progress in Lean (under review).
  • Initiated LeanPDE: Formalizing PDEs in general Euclidean spaces toward Millennium Problems.
  • Proposed Tensor-GaLore: Memory-efficient training via gradient tensor decomposition (NeurIPS Optimization Workshop 2024).
  • Invited to the 'Algorithmic Stability: Mathematical Foundations for the Modern Era' workshop at AIM, Caltech (May 2025).
  • Gave a talk at the 'Autoformalization for the Working Mathematician' workshop at ICERM, Brown University (April 2025).
  • Attended the Simons Institute & SLMath Joint Workshop on AI for Math and TCS (March 2025).
Research Experience
  • Currently a Research Intern at Amazon’s Reinforcement Learning Team in NYC; will join as a Research Scientist in Summer 2025.
  • Former CSRMP Research Scholar at Google AI and Data Science Intern at Microsoft Research.
  • Collaborated with Professors Martha White and Adam White (Google DeepMind) at Amii, in the RLAI Lab led by Rich Sutton (Google DeepMind).
  • Worked with Professor John Bowman in the Mathematics Department at the University of Alberta.
  • Conducted research on numerical algorithms affiliated with PIMS and AMI.
  • Co-led the ML Theory Reading Group at Cohere for AI.