Publications and Projects
Published multiple papers, including 'Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images' (Fraser & Kiritchenko, 2024) and 'Aporophobia: An Overlooked Type of Toxic Language Targeting the Poor' (Kiritchenko, Curto, Nejadgholi, & Fraser, 2023). Involved in several research projects, including: developing algorithms to automatically detect speech and language markers of cognitive or mental state; examining stereotyping and bias in social media text and in machine learning models; and evaluating frontier and fine-tuned AI models for safety risks and designing mitigations.
Research Experience
Currently an Associate Professor in the School of Electrical Engineering and Computer Science at the University of Ottawa; previously a Research Officer in the Text Analytics group at the National Research Council, where she researched language technologies for healthcare and social good. Her recent research has focused on social and ethical issues in natural language processing, such as identifying stereotypes and implicitly abusive language in social media text, as well as improving the interpretability and transparency of machine learning models.
Education
Completed a postdoc at the University of Gothenburg, Sweden, in 2018, where she worked on detecting mild cognitive impairment from speech patterns and eye movements.
Background
Research interests include natural language processing (NLP) and artificial intelligence (AI), particularly issues related to ethical AI and AI safety, such as bias, fairness, social stereotypes, deception, manipulation, and model evaluation. A large part of her research also focuses on NLP for healthcare, especially the detection of early signs of cognitive impairment from speech and language features.