Academic Achievements
Publications:
- "DecompX: Explaining Transformers Decisions by Propagating Token Decomposition" accepted to ACL 2023.
- "BERT on a Data Diet: Finding Important Examples by Gradient-Based Pruning" accepted to ENLSP@NeurIPS2022.
- "GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers" accepted to NAACL 2022 main conference.
- "Metaphors in Pre-Trained Language Models: Probing and Generalization Across Datasets and Languages" accepted to ACL 2022 main conference.
- "Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids’ Representations" accepted to EMNLP 2021 (BlackboxNLP).
Research Experience
Conducting NLP research at UCLA, contributing to multiple ongoing research projects.
Education
- Ph.D. student in Computer Science at UCLA, advised by Prof. Nanyun (Violet) Peng.
- Master's degree from the University of Tehran, advised by Prof. Yadollah Yaghoobzadeh and Prof. Mohammad Taher Pilehvar.
Background
Ph.D. student in Computer Science whose primary research interest is Natural Language Processing (NLP). Past work spans quantifying token attribution in Transformers, probing metaphors in pre-trained language models, layer-wise probing of BERToid representations, and gradient-based dataset pruning to identify important training examples.
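As an illustration of the gradient-based pruning idea mentioned above, here is a minimal PyTorch sketch of one common variant: scoring each training example by the norm of its loss gradient and keeping the top scorers. The function, model, and dataset names are hypothetical, and this is a generic heuristic, not the paper's implementation.

```python
import torch
import torch.nn as nn

def grad_norm_scores(model: nn.Module, dataset, loss_fn) -> list[float]:
    """Score each (x, y) example by the L2 norm of its loss gradient.

    Higher-scoring examples are treated as more important; low scorers
    become candidates for pruning. A generic gradient-norm heuristic,
    not the exact procedure from the paper.
    """
    scores = []
    for x, y in dataset:
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        # Collapse all parameter gradients into one per-example norm.
        sq_sum = sum(
            p.grad.pow(2).sum() for p in model.parameters() if p.grad is not None
        )
        scores.append(sq_sum.sqrt().item())
    return scores

# Hypothetical usage: keep the k highest-scoring examples.
# scores = grad_norm_scores(model, train_set, nn.CrossEntropyLoss())
# keep = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
```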
Miscellany
Personal website built using Jekyll & AcademicPages.