Constantin Venhoff
University of Oxford
Google Scholar ID: kzHpG-EAAAAJ
Interests: AI, ML, Interpretability, Mechanistic Interpretability, AI Alignment
Citations & Impact (all-time)
Citations: 28
H-index: 2
i10-index: 1
Publications: 8
Co-authors: 9
Contact: no contact links provided.
Publications (9 items)
Towards Understanding Multimodal Fine-Tuning: Spatial Features (2026), 0 citations
Sparse CLIP: Co-Optimizing Interpretability and Performance in Contrastive Learning (2026), 0 citations
Too Late to Recall: Explaining the Two-Hop Problem in Multimodal Knowledge Retrieval (2025), 0 citations
Base Models Know How to Reason, Thinking Models Learn When (2025), 0 citations
Towards Mechanistic Defenses Against Typographic Attacks in CLIP (2025), 0 citations
Reasoning-Finetuning Repurposes Latent Representations in Base Models (2025), 0 citations
Understanding Reasoning in Thinking Language Models via Steering Vectors (2025), 0 citations
How Visual Representations Map to Language Feature Space in Multimodal LLMs (2025), 0 citations
Resume: available (English only)
Co-authors (9 total)
Philip Torr (Professor, University of Oxford)
Neel Nanda (Mechanistic Interpretability Team Lead, Google DeepMind)
Ashkan Khakzar (University of Oxford)
Bernhard Rumpe (RWTH Aachen University)
Christian Schroeder de Witt (University of Oxford)
Iván Arcuschin (Independent Researcher)
Arthur Conmy (Google DeepMind)