Mehrdad Farajtabar

Google Scholar ID: shkKxnQAAAAJ
Research Scientist at Apple
Machine Learning · Large Language Models · Multimodal Models · Efficient ML · Continual Learning
Citations & Impact (all-time)
  • Citations: 7,364
  • h-index: 33
  • i10-index: 45
  • Publications: 20
  • Co-authors: 66
Academic Achievements
  • Area Chair for ICLR 2024 and 2025, NeurIPS 2025, and CVPR 2024; program committee member and reviewer for conferences including NeurIPS, AISTATS, ICML, ICLR, UAI, WWW, AAAI, WSDM, ASONAM, and IJCAI.
Research Experience
  • Senior Manager of Research at Apple (2022-present), focusing on applied research and innovation in large language and vision models.
  • Senior Research Scientist at DeepMind (2018-2022), working on applied research in deep learning and reinforcement learning.
Education
  • PhD in Computational Science and Engineering, Georgia Institute of Technology (2013-2018), supervised by Hongyuan Zha and Le Song.
  • MSc in Artificial Intelligence, Sharif University of Technology (2009-2011).
  • BSc in Software Engineering, Sharif University of Technology (2005-2009).
Background
  • Currently a senior research manager at Apple, leading a team that studies and improves the reasoning and planning capabilities of large language models (LLMs). His goal is to close the gap between the reasoning of frontier models and genuine human reasoning (general intelligence). He also works on optimizing LLMs for efficient on-device inference. More broadly, he works on understanding and demystifying how large vision and language models work and learn, in order to find more accurate and efficient pre-training and fine-tuning architectures, algorithms, and strategies.
Miscellany
  • Recent interests include large language models; inference efficiency; LLM reasoning, planning, and generalization; vision-language models; foundation models; efficient model training; continual and lifelong learning; multitask and transfer learning; and meta-learning.