Publications
2025: Two papers published in TMLR, 'Controlled Training Data Generation with Diffusion Models' and 'An Analysis of Model Robustness across Concurrent Distribution Shifts'.
2024: Paper published in ECCV, 'ViPer: Visual Personalization of Generative Models via Individual Preference Learning'.
2023: Paper published in NeurIPS, '4M: Massively Multimodal Masked Modelling', and paper published in ICCV, 'Rapid Network Adaptation: Learning to Adapt Neural Networks Using Test-Time Feedback'.
2022: Paper published in NeurIPS, 'Task Discovery: Finding the Tasks that Neural Networks Generalize on', and oral presentation at CVPR, '3D Common Corruptions and Data Augmentation'.
2021: Oral presentation at ICCV, 'Robustness via Cross-domain Ensembles'.
2020: Oral presentation at CVPR, 'Robust Learning Through Cross-Task Consistency'.
2019: Oral presentation at AAAI, 'Iterative Classroom Teaching'.
Research Experience
2018-2023: Teaching Assistant at EPFL; taught courses including CS503 Visual Intelligence, EE559 Deep Learning, and CS433 Machine Learning.
2016-2017: Data Scientist at Shift Technology; designed and implemented models for automated fraud detection.
2013-2015: Quantitative Researcher at UBS; researched systematic strategies for equity portfolios.
Education
2017-2024: EPFL, PhD in Computer Science. Advisor: Amir Zamir. Thesis: 'Making Computer Vision Models Robust and Adaptive'.
2015-2016: University of Cambridge, M.Phil. in Machine Learning and Machine Intelligence. Thesis: 'Bayesian optimization for natural language processing'.
Background
Research interests: the intersection of computer vision and machine learning, particularly making models more robust and adaptive. Professional area: neurosymbolic methods, with applications ranging from mathematics to embodied AI.