Kola Ayonrinde

Google Scholar ID: j40ixccAAAAJ
UK AI Safety Institute
Mechanistic Interpretability · Philosophy of AI · Active Inference · AI Safety
Citations & Impact (all-time)
  • Citations: 73
  • H-index: 4
  • i10-index: 3
  • Publications: 10
  • Co-authors: 12
Academic Achievements
  • Jan 1, 2025: Shazeer Typing
  • Dec 11, 2024: SAEBench: A Comprehensive Benchmark for Sparse Autoencoders
  • Oct 30, 2024: Standard SAEs Might Be Incoherent: A Choosing Problem & A “Concise” Solution
  • Aug 23, 2024: MDL-SAEs: Interpretability as Compression
  • Feb 11, 2024: Mamba Explained
  • Jan 14, 2024: The Impact of Mixtral
  • Jan 8, 2024: Descriptive Matrix Operations with Einops
  • Nov 3, 2023: Dictionary Learning with Sparse AutoEncoders
  • Oct 22, 2023: An Analogy for Understanding Mixture of Expert Models
  • Oct 20, 2023: From Sparse To Soft Mixtures of Experts
  • Jul 14, 2023: DeepSpeed's Bag of Tricks for Speed & Scale
Research Experience
  • Research on sparse autoencoders, including interpretability as compression and mixture-of-experts models.
Background
  • Research Scientist / ML Engineer, UK AI Safety Institute