Nolan Dey
Cerebras Systems
Google Scholar ID: JHUfMr0AAAAJ
Research interests: Large language models · Training efficiency · Sparsity · Explainable AI
Links: Homepage · Google Scholar
Citations & Impact (all-time)
Citations: 449
H-index: 6
i10-index: 5
Publications: 16
Co-authors: 6 (list below)
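The H-index and i10-index above follow the standard definitions: the h-index is the largest h such that h papers each have at least h citations, and the i10-index counts papers with at least 10 citations. A minimal Python sketch of both metrics (the example citation counts are hypothetical, not this profile's actual per-paper data):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


def i10_index(citations):
    """Number of papers with at least 10 citations each."""
    return sum(1 for cites in citations if cites >= 10)


# Hypothetical per-paper citation counts, for illustration only;
# these are NOT this profile's actual numbers.
example = [120, 95, 80, 40, 25, 12, 8, 3, 1, 0]
print(h_index(example))    # -> 7 (seven papers have >= 7 citations)
print(i10_index(example))  # -> 6 (six papers have >= 10 citations)
```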
Contact
No contact links provided.
Publications (6 of 16 shown)
Predicting Training Re-evaluation Curves Enables Effective Data Curriculums for LLMs (2025) · Cited: 0
Scaling with Collapse: Efficient and Predictable Training of LLM Families (2025) · Cited: 0
Power Lines: Scaling Laws for Weight Decay and Batch Size in LLM Pre-training (2025) · Cited: 0
Don't be lazy: CompleteP enables compute-efficient deep transformers (2025) · Cited: 0
Straight to Zero: Why Linearly Decaying the Learning Rate to Zero Works Best for LLMs (2025) · Cited: 1
Neuron-based explanations of neural networks sacrifice completeness and interpretability (2020) · Cited: 0
Resume (English only)
Co-authors (6 total)
Joel Hestness · Distinguished Research Scientist, Cerebras Systems
Daria Soboleva · Cerebras Systems
Shane Bergsma · Cerebras Systems
Co-author 4
Graham Taylor · University of Guelph and Vector Institute for Artificial Intelligence
Alexander Wong · Canada Research Chair (FIET, FInstP, FRSPH, FRSM, FRGS, FGS, FRSA, FISDDE), University of Waterloo