Maksym Andriushchenko
Google Scholar ID: ZNtuJYoAAAAJ
ELLIS Institute Tübingen & Max Planck Institute for Intelligent Systems
AI Safety
AI Alignment
LLMs
LLM agents
Homepage
Google Scholar
Citations & Impact (all-time)
Citations: 7,191
H-index: 24
i10-index: 27
Publications: 20
Co-authors: 17
Contact
Email: maksym@andriushchenko.me
Links: CV, Twitter, GitHub, LinkedIn
Publications (19 listed)
Claudini: Autoresearch Discovers State-of-the-Art Adversarial Attack Algorithms for LLMs (2026), 0 citations
PostTrainBench: Can LLM Agents Automate LLM Post-Training? (2026), 0 citations
International AI Safety Report 2026 (2026), 6 citations
Skill-Inject: Measuring Agent Vulnerability to Skill File Attacks (2026), 0 citations
Helpful to a Fault: Measuring Illicit Assistance in Multi-Turn, Multilingual LLM Agents (2026), 0 citations
HalluHard: A Hard Multi-Turn Hallucination Benchmark (2026), 0 citations
International AI Safety Report 2025: Second Key Update: Technical Safeguards and Risk Management (2025), 0 citations
Agent Skills Enable a New Class of Realistic and Trivially Simple Prompt Injections (2025), 0 citations
Co-authors (17 total)
Nicolas Flammarion (EPFL)
Matthias Hein (Professor of Computer Science, University of Tübingen)
Francesco Croce (EPFL)
Edoardo Debenedetti (ETH Zurich)
Matt Fredrikson (Carnegie Mellon University)
Marius Mosbach (Mila - Quebec AI Institute, McGill University)
Zico Kolter (Carnegie Mellon University)
Andy Zou (PhD Student, Carnegie Mellon University)