Matthew Kowal
Google Scholar ID: FCg8QxUAAAAJ
FAR AI, York University, Vector Institute
Machine Learning · Interpretability · Computer Vision
Citations & Impact
All-time
Citations: 358
H-index: 9
i10-index: 9
Publications: 17
Co-authors: 34
Academic Achievements
  • Preprint 2025: 'Into the Rabbit Hull: From Task-Relevant Concepts in DINO to Minkowski Geometry'
  • Preprint 2025: 'It’s the Thought that Counts: Evaluating the Attempts of Frontier LLMs to Persuade on Harmful Topics' (introduced AttemptPersuadeEval)
  • ICML 2025: 'Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment'
  • ICML 2025: 'Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept Extraction in Large Vision Models'
  • CVPR 2024 (Spotlight): 'Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models'
  • CVPR 2024 (Spotlight): 'Understanding Video Transformers via Universal Concept Discovery'
  • CVPR 2022 & XAI4CV Workshop (Spotlight): 'A Deeper Dive Into What Deep Spatiotemporal Networks Encode'
  • ICCV 2021: 'Global Pooling, More than Meets the Eye: Position Information is Encoded Channel-Wise in CNNs'
  • ICLR 2021: 'Shape or Texture: Understanding Discriminative Features in CNNs'
  • BMVC 2020 (Oral): 'Feature Bin'
Research Experience
  • Member of Technical Staff at FAR AI, working on AI safety, mechanistic interpretability, and LLM persuasion
  • Intern at Ubisoft La Forge, working on generative modeling for character animations
  • Intern at Toyota Research Institute, contributing to interpretability research for video transformers
  • Faculty affiliate researcher at Vector Institute
  • Lead Scientist in Residence at NextAI (2020–2022)
  • Worked at Morrison Hershfield after completing his Bachelor's degree, designing building, laboratory, and residential projects