Matthew Kowal
Google Scholar ID: FCg8QxUAAAAJ
FAR AI, York University, Vector Institute
Machine Learning
Interpretability
Computer Vision
Homepage
Google Scholar
Citations & Impact (All-time)
Citations: 358
H-index: 9
i10-index: 9
Publications: 17
Co-authors: 34
Contact
Email: matt2kowal@gmail.com
CV
Twitter
GitHub
LinkedIn
Publications
9 items
Concept Influence: Leveraging Interpretability to Improve Performance and Efficiency in Training Data Attribution (2026) · Cited 0
TamperBench: Systematically Stress-Testing LLM Safety Under Fine-Tuning and Tampering (2026) · Cited 1
Interpreting Physics in Video World Models (2026) · Cited 0
Large language models can effectively convince people to believe conspiracies (arXiv.org, 2026) · Cited 0
Emergent Persuasion: Will LLMs Persuade Without Being Prompted? (2025) · Cited 0
Into the Rabbit Hull: From Task-Relevant Concepts in DINO to Minkowski Geometry (2025) · Cited 0
It's the Thought that Counts: Evaluating the Attempts of Frontier LLMs to Persuade on Harmful Topics (2025) · Cited 0
Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept Extraction in Large Vision Models (2025) · Cited 0
Resume (English only)
Academic Achievements
Preprint 2025: 'Into the Rabbit Hull: From Task-Relevant Concepts in DINO to Minkowski Geometry'
Preprint 2025: 'It’s the Thought that Counts: Evaluating the Attempts of Frontier LLMs to Persuade on Harmful Topics' (introduced AttemptPersuadeEval)
ICML 2025: 'Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment'
ICML 2025: 'Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept Extraction in Large Vision Models'
CVPR 2024 (Spotlight): 'Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models'
CVPR 2024 (Spotlight): 'Understanding Video Transformers via Universal Concept Discovery'
CVPR 2022 & XAI4CV Workshop (Spotlight): 'A Deeper Dive Into What Deep Spatiotemporal Networks Encode'
ICCV 2021: 'Global Pooling, More than Meets the Eye: Position Information is Encoded Channel-Wise in CNNs'
ICLR 2021: 'Shape or Texture: Understanding Discriminative Features in CNNs'
BMVC 2020 (Oral): 'Feature Bin'
Research Experience
Member of Technical Staff at FAR AI, working on AI safety, mechanistic interpretability, and LLM persuasion
Intern at Ubisoft La Forge, working on generative modeling for character animations
Intern at Toyota Research Institute, contributing to interpretability research for video transformers
Faculty affiliate researcher at Vector Institute
Lead Scientist in Residence at NextAI (2020–2022)
Worked at Morrison Hershfield after completing a Bachelor's degree, designing buildings, labs, and residential projects
Co-authors
34 total
Konstantinos G. Derpanis
York University and Samsung AI Centre Toronto
Md Amirul Islam
Center for Advanced AI, Accenture
Neil Bruce
School of Computer Science, University of Guelph
Co-author 4
Björn Ommer
Professor, Computer Vision & Learning Group (CompVis), University of Munich
Co-author 6
Thomas Fel
Kempner Fellow, Harvard University
Mennatullah Siam
University of British Columbia