Yan Scholten
Google Scholar ID: 8G2bJ7sAAAAJ
Technical University of Munich
Machine Learning
Homepage
Google Scholar
Citations & Impact (all-time)
Citations: 212
H-index: 6
i10-index: 5
Publications: 12
Co-authors: 15
Contact
CV
GitHub
Publications (6 listed)
Tail-aware Adversarial Attacks: A Distributional Approach to Efficient LLM Jailbreaking
2025 · Cited 0
Model Collapse Is Not a Bug but a Feature in Machine Unlearning for LLMs
2025 · Cited 0
Adversarial Alignment for LLMs Requires Simpler, Reproducible, and More Measurable Objectives
2025 · Cited 0
Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning
arXiv.org · 2024 · Cited 1
A Probabilistic Perspective on Unlearning and Alignment for Large Language Models
arXiv.org · 2024 · Cited 2
Assessing Robustness via Score-Based Adversarial Image Generation
arXiv.org · 2023 · Cited 4
Resume
Background
PhD student in the Data Analytics and Machine Learning group at Technical University of Munich (TUM)
Research focuses on trustworthy AI
Aims to make machine learning safer, more reliable, and better aligned with human values
Core research areas include machine unlearning, alignment, adversarial robustness, robustness certification, and conformal prediction
Recent work advances the capabilities and reliability of large language models (LLMs)
Co-authors (15 total)
Stephan Günnemann
Professor of Computer Science, Technical University of Munich
Stefan Heindorf
Paderborn University
Martin Potthast
University of Kassel, hessian.AI, and ScaDS.AI
Leo Schwinn
Technical University of Munich
Jan Schuchardt
Morgan Stanley Machine Learning Research
Axel-Cyrille Ngonga Ngomo
Professor of Data Science at Paderborn University, Heinz Nixdorf Institute
Henning Wachsmuth
Leibniz University Hannover, L3S Research Center
Aleksandar Bojchevski
University of Cologne