Scholar
Johnny Lin
Google Scholar ID: O1u14pcAAAAJ
Neuronpedia
artificial intelligence
mechanistic interpretability
explainable ai
Homepage
Google Scholar
Citations & Impact (all-time)
Citations: 93
H-index: 7
i10-index: 5
Publications: 7
Co-authors: 0
Contact
Twitter
GitHub
Publications
2 items
- Priors in Time: Missing Inductive Biases for Language Model Interpretability (2025), cited 0 times
- SAEBench: A Comprehensive Benchmark for Sparse Autoencoders in Language Model Interpretability (2025), cited 2 times
Resume (English only)
Academic Achievements
- Published work on Sparse Autoencoders, Feature Splitting, and more.
- Developed tools such as Gemma Scope and Llama Scope for exploring the inner workings of language models.
Research Experience
- Collaborated on research with Anthropic, EleutherAI, Goodfire AI, Google DeepMind, and Decode.
- Contributed to the development of tools like Circuit Tracer.
Background
Involved in multiple research projects related to AI model interpretability.
Co-authors
0 total (list not available)