Papers
Paper 'Beyond Technocratic XAI: The Who, What & How in Explanation Design', AIES 2025 (Main Conference)
Paper 'Mechanistic Interpretability Needs Philosophy', under review
Paper 'Evaluating Multimodal Language Models as Visual Assistants for Visually Impaired Users', ACL 2025 (Main Conference)
Paper 'Investigating the Role of Modality and Training Objective on Representational Alignment Between Transformers and the Brain', NeurIPS 2024 (UniReps Workshop)
Paper 'Defining Knowledge: Bridging Epistemology and Large Language Models', EMNLP 2024 (Main Conference)
Presentations
Presentation 'From Words to Worlds: Compositionality for Cognitive Architectures', LLMs & Cognition Workshop @ ICML 2024
Presentation 'The Completeness Problem: Beyond Human Metrics in Assessing Abilities of Cognitive Systems', ACAIN 2024
Presentation 'Compositionality in Language Models: A Perspective in Changing Interpretations and Methods', AIAI 2024
Background
PhD Researcher in Natural Language Processing and AI
Research focuses on the evaluation of Large Language Models (LLMs): how to determine whether a model is 'good' or 'safe', whether current evaluation practices are rigorous enough, and how to make model strengths and limitations more interpretable and actionable for the public
Interested in AI Policy and Governance, especially the role of evaluation practices
Affiliated with the CoAStaL Lab, Department of Computer Science, University of Copenhagen, supervised by Dr. Anders Søgaard
Also involved with the Centre of Philosophy of AI, exploring the societal and ethical dimensions of AI