Robert Joseph George
Google Scholar ID: 5P1Uwy4AAAAJ
California Institute of Technology
Research interests: Machine Learning, AI4Science, AI4Math
Homepage
Google Scholar
Citations & Impact (all-time)
Citations: 95
H-index: 4
i10-index: 3
Publications: 8
Co-authors: 10
Contact
CV
Twitter
GitHub
LinkedIn
Publications (7)
TorchLean: Formalizing Neural Networks in Lean (2026). Cited: 0
QEDBENCH: Quantifying the Alignment Gap in Automated Evaluation of University-Level Mathematical Proofs (2026). Cited: 0
BRIDGE: Building Representations In Domain Guided Program Verification (2025). Cited: 0
LeanProgress: Guiding Search for Neural Theorem Proving via Proof Progress Prediction (2025). Cited: 0
Tensor-GaLore: Memory-Efficient Training via Gradient Tensor Decomposition (2025). Cited: 0
A Library for Learning Neural Operators (arXiv.org, 2024). Cited: 2
LeanAgent: Lifelong Learning for Formal Theorem Proving (arXiv.org, 2024). Cited: 3
Resume
Academic Achievements
Developed Incremental Fourier Neural Operator (iFNO) for large-scale PDEs (TMLR 2024).
Created CoDA-NO: Pretrained Codomain Attention Neural Operators for Multiphysics PDEs, achieving state-of-the-art results (NeurIPS 2024).
Contributing to an open-source Neural Operator library (under review).
Led LeanAgent: Lifelong Learning for Formal Theorem Proving (ICLR 2025).
Developed LeanProgress: First reward model predicting proof progress in Lean (under review).
Initiated LeanPDE: Formalizing PDEs in general Euclidean spaces toward Millennium Problems.
Proposed Tensor-GaLore: Memory-efficient training via gradient tensor decomposition (NeurIPS Optimization Workshop 2024).
Invited to the 'Algorithmic Stability: Mathematical Foundations for the Modern Era' workshop at AIM, Caltech (May 2025).
Gave a talk at the 'Autoformalization for the Working Mathematician' workshop at ICERM, Brown University (April 2025).
Attended the Simons Institute & SLMath Joint Workshop on AI for Math and TCS (March 2025).
Research Experience
Currently a Research Intern at Amazon’s Reinforcement Learning Team in NYC; will join as a Research Scientist in Summer 2025.
Former CSRMP Research Scholar at Google AI and Data Science Intern at Microsoft Research.
Collaborated with Professors Martha White and Adam White (Google DeepMind) in the RLAI Lab, led by Rich Sutton (Google DeepMind), and at Amii.
Worked with Professor John Bowman in the Mathematics Department at the University of Alberta.
Conducted research on numerical algorithms affiliated with PIMS and AMI.
Co-led the ML Theory Reading Group at Cohere for AI.
Co-authors (10 total)
Anima Anandkumar, California Institute of Technology and NVIDIA
Zongyi Li, MIT
Jean Kossaifi, Senior Research Scientist at NVIDIA
Jiawei Zhao, Meta FAIR
Boris Bonev, NVIDIA Research
Julius Berner, NVIDIA
Kamyar Azizzadenesheli, NVIDIA
Daniel V Leibovici, Research Scientist, NVIDIA Research