2025 Publications:
Pay Attention to the Triggers: Constructing Backdoors That Survive Distillation
Fewer Weights, More Problems: A Practical Attack on LLM Pruning
BaxBench: Can LLMs Generate Secure and Correct Backends?
Black-Box Adversarial Attacks on LLM-Based Code Completion
Mind the Gap: A Practical Attack on GGUF Quantization
Finetuning-Activated Backdoors in LLMs
Large Language Models are Advanced Anonymizers
2024 Publications:
A Synthetic Dataset for Personal Attribute Inference
Private Attribute Inference from Images with Vision-Language Models
Exploiting LLM Quantization
COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act
CuTS: Customizable Tabular Synthetic Data Generation
Instruction Tuning for Secure Code Generation
Beyond Memorization: Violating Privacy Via Inference with Large Language Models
Back to the Drawing Board for Fair Representation Learning
2023 Publications:
TabLeak: Tabular Data Leakage in Federated Learning
Research Experience
Teaching Experience
Analysis 1, Autumn 2019, Autumn 2022
Analysis 2, Spring 2020, Spring 2022
Analysis 3, Autumn 2020
Introduction to Electrical Engineering, Spring 2019
Engineering Mechanics, Autumn 2018
Background
PhD student at the Department of Computer Science, ETH Zürich, and a member of the Secure, Reliable, and Intelligent Systems Lab, supervised by Prof. Martin Vechev since July 2023.