Agora Research hub
Constantin Venhoff
Scholar


Google Scholar ID: kzHpG-EAAAAJ
University of Oxford
Fields: AI, ML, Interpretability, Mechanistic Interpretability, AI Alignment
Google Scholar
Citations & Impact (all-time)
Citations: 28
H-index: 2
i10-index: 1
Publications: 8
Co-authors: 9
Contact
No contact links provided.
Publications
8 items
Towards Understanding Multimodal Fine-Tuning: Spatial Features (2026). Citations: 0
Sparse CLIP: Co-Optimizing Interpretability and Performance in Contrastive Learning (2026). Citations: 0
Too Late to Recall: Explaining the Two-Hop Problem in Multimodal Knowledge Retrieval (2025). Citations: 0
Base Models Know How to Reason, Thinking Models Learn When (2025). Citations: 0
Towards Mechanistic Defenses Against Typographic Attacks in CLIP (2025). Citations: 0
Reasoning-Finetuning Repurposes Latent Representations in Base Models (2025). Citations: 0
Understanding Reasoning in Thinking Language Models via Steering Vectors (2025). Citations: 0
How Visual Representations Map to Language Feature Space in Multimodal LLMs (2025). Citations: 0
Co-authors
9 total
Philip Torr, Professor, University of Oxford
Neel Nanda, Mechanistic Interpretability Team Lead, Google DeepMind
Ashkan Khakzar, University of Oxford
Co-author 4
Bernhard Rumpe, RWTH Aachen University
Christian Schroeder de Witt, University of Oxford
Iván Arcuschin, Independent Researcher
Arthur Conmy, Google DeepMind
