Working at L3S Research Center, focusing on interpretability for NLP models.
Education
PhD Student - Leibniz University, Major in Interpretable AI (advisor unknown, dates unknown).
Background
I am currently pursuing a PhD in Interpretable AI at Leibniz University in Hannover, Germany. My research focuses on interpretability in socio-technical systems; my main goal is to understand how LLMs process information and to ensure they do so reliably. Current interests include: mechanistic interpretability, question answering and information retrieval (QA & IR), and robustness of deep learning models.