Mark Vero

Google Scholar ID: vguDYtQAAAAJ
Google, ETH Zurich
large language models · privacy · security · machine learning
Citations & Impact (all-time)
  • Citations: 508
  • H-index: 9
  • i10-index: 9
  • Publications: 15
  • Co-authors: 25
Academic Achievements
  • 2025 Publications:
  • Pay Attention to the Triggers: Constructing Backdoors That Survive Distillation
  • Fewer Weights, More Problems: A Practical Attack on LLM Pruning
  • BaxBench: Can LLMs Generate Secure and Correct Backends?
  • Black-Box Adversarial Attacks on LLM-Based Code Completion
  • Mind the Gap: A Practical Attack on GGUF Quantization
  • Finetuning-Activated Backdoors in LLMs
  • Large Language Models are Advanced Anonymizers
  • 2024 Publications:
  • A Synthetic Dataset for Personal Attribute Inference
  • Private Attribute Inference from Images with Vision-Language Models
  • Exploiting LLM Quantization
  • COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act
  • CuTS: Customizable Tabular Synthetic Data Generation
  • Instruction Tuning for Secure Code Generation
  • Beyond Memorization: Violating Privacy Via Inference with Large Language Models
  • Back to the Drawing Board for Fair Representation Learning
  • 2023 Publications:
  • TabLeak: Tabular Data Leakage in Federated Learning
Teaching Experience
  • Analysis 1, Autumn 2019, Autumn 2022
  • Analysis 2, Spring 2020, Spring 2022
  • Analysis 3, Autumn 2020
  • Introduction to Electrical Engineering, Spring 2019
  • Engineering Mechanics, Autumn 2018
Background
  • PhD student at the Department of Computer Science, ETH Zürich, in the Secure, Reliable, and Intelligent Systems Lab, supervised by Martin Vechev since July 2023.