- September 2023: Two papers accepted to IJCNLP-AACL 2023
- July 2023: Metric-based learning paper accepted to INLG 2023
- March 2023: Gave a talk titled “Living in the world of large language models: Successes and Failures” at KOLT Webinar
- February 2023: Two papers accepted to EACL 2023
- July 2022: Gave a lecture titled “Towards Fair NLP Models: An Overview of Recent Bias Detection and Mitigation Strategies” at Text Mining and Natural Language Processing for Computational Social Sciences Summer School
- July 2022: Paper “On the rate of convergence of a classifier based on a Transformer encoder” accepted to IEEE Transactions on Information Theory
- June 2022: Will serve as an Area Chair for the “Low-resourced and less studied languages” track at COLING 2022
- April 2022: Application to the TÜBİTAK 2236 Co-Funded Brain Circulation Scheme 2 (CoCirculation2) fellowship approved (declined in favor of the 2232-B fellowship)
- March 2022: Received a 3-year TÜBİTAK 2232-B International Fellowship for Outstanding Researchers
- February 2022: Joined the Computer Science and Engineering Department at Koç University as an Asst. Prof., collaborating closely with KUIS AI
- October 2021: Organized the first multilingual representation workshop at EMNLP 2021 with Duygu Ataman, Alexandra Birch, Alexis Conneau, Orhan Firat, and Sebastian Ruder
Research Experience
- Currently an Asst. Prof. in the Computer Science and Engineering Department at Koç University, also affiliated with KUIS AI
- Previously a postdoctoral researcher at UKP, Technical University of Darmstadt
- During PhD studies, visited the Institute for Language, Cognition and Computation (ILCC)
Education
- Ph.D. in Computer Engineering from Istanbul Technical University, supervised by Prof. Eşref Adalı
- Postdoctoral researcher at UKP, Technical University of Darmstadt, working with Prof. Iryna Gurevych
- Visiting scholar at the Institute for Language, Cognition and Computation (ILCC), working with Prof. Mark Steedman
Background
Research interests include semantics, procedural language understanding, and the abstraction and reasoning capabilities of large language models. The goal is to push the boundaries of natural language processing research by building machine learning models and open-access tools and datasets for the world’s languages.