Evgenii Kortukov
Google Scholar ID: 7qTZ4NEAAAAJ
Explainable AI group, Fraunhofer Heinrich Hertz Institute
Machine Learning
Interpretability
AI Safety
Homepage
Google Scholar
Citations & Impact
All-time
Citations: 33
H-index: 3
i10-index: 1
Publications: 7
Co-authors: 14
Contact
Email
ee{last_name}gmail.com
CV
Twitter
GitHub
LinkedIn
Publications
5 items
Concept-based explanations of Segmentation and Detection models in Natural Disaster Management · 2026 · Cited 0
A Behavioural and Representational Evaluation of Goal-Directedness in Language Model Agents · 2026 · Cited 0
Strategic Dishonesty Can Undermine AI Safety Evaluations of Frontier LLMs · 2025 · Cited 0
ASIDE: Architectural Separation of Instructions and Data in Language Models · 2025 · Cited 0
Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI · arXiv.org · 2024 · Cited 2
Co-authors
14 total
Elisa Nguyen · University of Tübingen
Seong Joon Oh · University of Tübingen
Alexander Rubinstein · PhD Student at the University of Tübingen
Jean Y. Song · Assistant Professor, DGIST
Setareh Maghsudi · Ruhr-University Bochum
Saeed Ghoorchian · SAP AI Research
Wojciech Samek · Professor at TU Berlin, Head of AI Department at Fraunhofer HHI, BIFOLD Fellow
Sebastian Lapuschkin · Head of Explainable AI, Fraunhofer Heinrich Hertz Institute