Preprint on the Visual Iconicity Challenge posted on arXiv; paper on Semantics-Aware Co-Speech Gesture Generation accepted at ICCV.
Research Experience
Developed explainable multimodal emotion-recognition techniques during my PhD and postdoctoral research at Maastricht University; investigated linguistic–gestural alignment and automatic gesture segmentation in dialogue at the Institute for Logic, Language & Computation (University of Amsterdam). Participated in two EU-funded studies (200+ participants) and in a work package that combined clinicians' expertise with machine intelligence in socio-economic contexts.
Education
PhD and postdoctoral training at Maastricht University, with a background in computer science and engineering.
Background
I study and model how verbal and non-verbal cues work together across a range of human behaviors. Since September 2024, I have been a researcher in the Multimodal Language Department at the Max Planck Institute for Psycholinguistics, where I model multimodal communication both to gain insight into human interaction and to build machine applications. Trained in computer science and engineering, I work across AI, cognitive science, psycholinguistics, psychology, and healthcare to computationally model human behavior in fundamental and applied research. My work focuses on multimodal interaction, particularly in dialogue.