Resume
Academic Achievements
Co-authored 'TripPy', winner of the SIGDIAL 2020 Best Paper Award.
Paper 'Less is More: Local Intrinsic Dimensions of Contextual Language Models' accepted at NeurIPS 2025.
Released the preprint 'Post-Training Large Language Models via Reinforcement Learning from Self-Feedback' (July 2025), introducing RLSF, a reinforcement learning approach that fine-tunes LLMs using their own feedback signals.
Paper 'A Confidence-based Acquisition Model for Self-supervised Active Learning and Label Correction' published in TACL (2023) and to be presented at ACL 2025.
Published at top venues including EMNLP and ACL, as well as in IEEE journals, with a focus on generative modeling and trustworthy AI.
Background
Currently a Postdoctoral Researcher in the Dialogue Systems and Machine Learning group at Heinrich Heine University, Düsseldorf, under Prof. Milica Gašić.
Research focuses on understanding uncertainty in large language models (LLMs), generative AI, trustworthy AI, and reinforcement learning for LLM fine-tuning.
Has 10 years of combined academic and industry experience in machine learning, deep learning, and natural language processing.
Previously worked as an AI consultant, applying state-of-the-art AI methods to real-world problems.
Aims to make dialogue systems smarter and more reliable through deeper insights into LLMs.