Published several papers, including 'Language Models as Knowledge Bases' (EACL 2021) and 'Sequence Tagging with Contextual and Non-Contextual Subword Representations: A Multilingual Evaluation' (ACL 2019).
Research Experience
Research Scientist at RIKEN and Tohoku University; previously worked with the language understanding team at RIKEN. Research focuses on the intersection of knowledge bases and language models.
Education
PhD in Computational Linguistics, 2019, Heidelberg University; Magister in Computational Linguistics, 2015, Heidelberg University
Background
Research interests include knowledge representation in language models, interpretability and internal representations of language models, information extraction, tokenization, and multilingual subword methods. Overall aim: to understand whether, and how, language models understand language.