Publications
1. When Bias Backfires: The Modulatory Role of Counterfactual Explanations on the Adoption of Algorithmic Bias in XAI-Supported Human Decision-Making
2. Large Language Models Do Not Simulate Human Psychology
3. Generation Gap or Diffusion Trap? How Age Affects the Detection of Personalized AI-Generated Images
4. The role of user feedback in enhancing understanding and trust in counterfactual explanations for explainable AI
5. Shaping Trustworthy AI: An Introduction to This Issue
6. CL-XAI: Toward Enriched Cognitive Learning with Explainable Artificial Intelligence
7. Automatic Matchmaking in Two-Versus-Two Sports
8. For Better or Worse: The Impact of Counterfactual Explanations’ Directionality on User Behavior in xAI
9. Let's go to the Alien Zoo: Introducing an experimental framework to study usability of counterfactual explanations for machine learning
Research Experience
She is affiliated with the Machine Learning Group at Bielefeld University, led by Barbara Hammer, and the CoR-Lab Research Institute for Cognition and Robotics. Additionally, she serves as the scientific coordinator of the KI-Akademie OWL.
Education
No educational background information is provided.
Background
Research Interests: Explainable AI (xAI), Human-Computer Interaction, and Interdisciplinary Collaboration. As a postdoctoral researcher, she aims to provide empirical evidence to inform the design, development, and deployment of explainable artificial intelligence systems that effectively meet user needs and strengthen user trust. Her specific focus is on the usability of counterfactual explanations: simplified "what-if" scenarios that show how changes to input variables would lead to a different model outcome.
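To illustrate the idea behind counterfactual explanations described above, the following is a minimal sketch in Python. The toy decision rule, feature names, and search procedure are hypothetical illustrations of the general technique, not taken from her published work.

```python
# Minimal sketch of a counterfactual "what-if" explanation.
# The model (predict_loan) and features (income, debt) are hypothetical.

def predict_loan(income: float, debt: float) -> str:
    """Toy decision rule standing in for a trained model."""
    return "approved" if income - 2 * debt >= 50 else "denied"

def counterfactual(income: float, debt: float,
                   step: float = 1.0, max_steps: int = 200) -> str:
    """Search for the smallest income increase that flips the outcome."""
    original = predict_loan(income, debt)
    for k in range(1, max_steps + 1):
        new_income = income + k * step
        if predict_loan(new_income, debt) != original:
            return (f"If income were {new_income:.0f} instead of "
                    f"{income:.0f}, the outcome would be "
                    f"'{predict_loan(new_income, debt)}'.")
    return "No counterfactual found within the search range."

print(counterfactual(income=40, debt=0))
# → If income were 50 instead of 40, the outcome would be 'approved'.
```

The returned sentence is the counterfactual explanation: rather than exposing the model's internals, it tells the user which minimal change to their input would have produced a different result.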
Miscellany
No additional personal information, such as interests or hobbies, is provided.