M Saiful Bari
Google Scholar ID: xVp48HAAAAAJ
Applied Scientist, Amazon AGI
Citations & Impact (all-time)
  • Citations: 7,219
  • H-index: 18
  • i10-index: 20
  • Publications: 20
  • Co-authors: 9
Resume (English only)
Academic Achievements
  • August 2025: AraEval accepted at EMNLP'25.
  • October 2024: Published a technical paper on scalable oversight for frontier-class models.
  • October 2024: Announced ALLaM during the keynote of the Global AI Summit.
  • May 2024: xCodeEval paper accepted at ACL 2024.
  • May 2023: Three papers accepted at ACL'23.
  • December 2022: Pre-print "SPT: Semi-Parametric Prompt Tuning for Multitask Prompted Learning".
  • November 2022: Pre-print "BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting".
  • November 2022: Returned from the Amazon d2l summer internship.
Research Experience
  • He is an Applied Scientist at Amazon AGI, currently studying ablations and scaling laws for the Nova series of models. Previously, he worked on enhancing their multilingual capabilities. Before joining Amazon, he was the Training Lead and one of the core maintainers of ALLaM, a sovereign foundation model for English and Arabic language technologies.
Background
  • Research interests include Artificial Intelligence, Deep Learning, Natural Language Processing, Large Language Models (LLMs), Multilingual NLP (machine translation, cross-lingual tasks), NLP for Programming, and LLM Safety/Alignment. His research focuses on understanding and advancing large language models, particularly scaling, training dynamics, and the systematic evaluation of frontier models.