Publications
SUB: Benchmarking CBM Generalization via Synthetic Attribute Substitutions, ICCV 2025
Align-then-Unlearn: Embedding Alignment for LLM Unlearning, ICML 2025 Workshop on Machine Unlearning for Generative AI
Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs), ICLR 2025
Decoupling Angles and Strength in Low-rank Adaptation, ICLR 2025
Addressing caveats of neural persistence with deep graph persistence, TMLR (11/2023)
Research Experience
PhD Researcher.
Education
Bachelor's degree in Computational Linguistics, Heidelberg University, 2021
Master's degree in Computational Linguistics, University of Tübingen, 2023
Supervisor: Prof. Zeynep Akata
Background
Primary interests: gaining a better understanding of deep learning models, including how neural networks work, and constructing more interpretable models.