- Inter-animal transforms as a guide to model-brain comparison
- Recurrent Connections in the Primate Ventral Visual Stream Mediate a Trade-Off Between Task Performance and Network Size During Core Object Recognition
- Limiting Dynamics of SGD: Modified Loss, Phase Space Oscillations, and Anomalous Diffusion
- Neural Mechanics: Symmetry and Broken Conservation Laws in Deep Learning Dynamics
- Two Routes to Scalable Credit Assignment without Weight Symmetry
Research Experience
Before joining the Stanford Neuroscience and Artificial Intelligence Laboratory, I spent two years working at a Mexican FinTech startup, teaching computer science, and doing research on machine learning for text mining. I currently work with P.I. Dan Yamins on projects involving recurrent models, biologically-inspired learning rules, and deep learning theory.
Education
- PhD in Computational and Mathematical Engineering, 2023, Stanford University
- MSc in Computational and Mathematical Engineering, 2019, Stanford University
- BSc in Computer Engineering, 2015, Instituto Tecnológico Autónomo de México
- BSc in Applied Mathematics, 2015, Instituto Tecnológico Autónomo de México
Background
PhD candidate at the Institute for Computational and Mathematical Engineering at Stanford University. Broadly interested in understanding how the human brain works using computational models, with a focus on recurrent models of the visual system, biologically-inspired learning rules, and deep learning theory. My recent work centers on measuring the similarity between individuals' neural responses and building mappings between them. I am also interested in the application and safe deployment of AI systems in the real world.
Miscellany
My non-academic interests include alpine skiing, cycling, hiking, cooking, and an ever-growing obsession with coffee.