Publications
- 'On the Difficulty of Constructing a Robust and Publicly-Detectable Watermark' in AISTATS 2025.
- 'LiNeS: Post-Training Layer Scaling Prevents Forgetting and Enhances Model Merging' in ICLR 2025.
- 'Localizing Task Information for Improved Model Merging and Compression' in ICML 2024.
- 'Task Arithmetic in the Tangent Space: Improved Editing of Pre-Trained Models' in NeurIPS 2023.
- 'What Can Linearized Neural Networks Actually Say About Generalization?' in NeurIPS 2021.
Research Experience
Senior research scientist at Google DeepMind. Previously interned at Google Research in Zürich and was a visiting researcher in Philip Torr’s lab at the University of Oxford, UK.
Education
PhD from EPFL, Switzerland, under the supervision of Pascal Frossard; MSc from TU Delft, the Netherlands; BSc from Universidad Politécnica de Madrid, Spain.
Background
Senior research scientist focusing on responsible AI: developing technologies that cultivate a healthier data ecosystem (such as SynthID) and improve GenAI models. Also interested in the science of deep learning, specifically in understanding how and why neural networks learn.