Published 'Bridging Fairness and Explainability: Can Input-Based Explanations Promote Fairness in Hate Speech Detection?' as a preprint in September 2025.
Published 'B-cos LM: Efficiently Transforming Pre-trained Language Models for Improved Explainability' as a preprint in February 2025.
Conducted a full-day workshop on research data management at the RTG in February 2025.
Research Experience
Postdoctoral researcher at the University of Saarland in the research training group Neuroexplicit Models. Author of preprints on fairness and explainability in NLP.
Education
Information not provided
Background
Currently a postdoc at the research training group Neuroexplicit Models at the University of Saarland (Germany). Research focuses on efficient model training in natural language processing, particularly using active learning methods to handle low-resource scenarios with user-provided labels. Also interested in human language learning, often evaluating these methods in the context of automated exercise generation and assessment. A big fan of user studies, having devised and conducted various evaluation studies involving citizen scientists.