Publications
Paper: Robustness in Both Domains: CLIP Needs a Robust Text Encoder, NeurIPS 2025
Paper: Certified Robustness Under Bounded Levenshtein Distance, ICLR 2025
Paper: Membership Inference Attacks against Large Vision-Language Models, NeurIPS 2024
Paper: Revisiting Character-level Adversarial Attacks for Language Models, ICML 2024
Paper: Efficient Local Linearity Regularization to Overcome Catastrophic Overfitting, ICLR 2024
Paper: Sound and Complete Verification of Polynomial Networks, NeurIPS 2022
Talks
Talk: Revisiting Character-level Adversarial Attacks for Language Models, La Salle University (Barcelona), DLBCN 2024
Research Experience
Currently a PhD student at EPFL and a Student Researcher at an institution.
Background
Research interests include robustness in Natural Language Processing, specifically adversarial attacks, adversarial training, and neural network verification methods.