Samet Oymak

Google Scholar ID: AY6InkoAAAAJ
University of Michigan | Google Research
Interests: machine learning, decision making, statistics, optimization, language models
Citations & Impact
All-time
  • Citations: 4,971
  • H-index: 36
  • i10-index: 67
  • Publications: 20
  • Co-authors: 20
Academic Achievements
  • Senior Area Chair for ICML 2025 (Seoul)
  • 4 papers accepted to NeurIPS 2025, including 'BREAD' and 'Attention with Trained Embeddings Provably Selects Important Tokens'
  • Multiple ICML 2025 publications, including spotlight papers 'Everything Everywhere All at Once' and 'Test-Time Training Provably Improves Transformers as In-context Learners'
  • ICLR 2025 spotlight paper: 'High-dimensional Analysis of Knowledge Distillation'
  • 2 papers at AAAI 2025, with 'On the Power of Convolution Augmented Transformer' selected for oral presentation
  • CVPR 2025 paper: 'AdMiT: Adaptive Multi-Source Tuning in Dynamic Environments'
  • AISTATS 2025: 'Provable Benefits of Task-Specific Prompts for In-context Learning'
  • 4 papers at NeurIPS 2024, including 'Selective Attention' and 'Efficient Contextual LLM Cascades'
  • ICML 2024 papers: 'Self-Attention <=> Markov Models' and 'Can Mamba Learn How to Learn?'
  • Publications at AISTATS 2024, AAAI 2024, WACV 2024
  • NeurIPS 2023 spotlight paper: 'Max-Margin Token Selection in Attention Mechanism'
  • Invited talks at USC, INFORMS, Yale, Google NYC, and Harvard on transformer theory
Miscellany
  • Teaching: EECS 498 'Foundations of LLMs', EECS 553 'Machine Learning'
  • Encourages IMO/IOI/USAMO medalists to reach out for AI reasoning research opportunities
  • PhD students Yingcong and Xiangyu graduated; Mingchen joined Meta as a Research Scientist
  • 2023 interns admitted to PhD programs at UC Berkeley, Harvard, and UIUC
  • July–August 2024 travel: UC Riverside, ICML (Vancouver), JSM (Nashville), UW-IFDS (Seattle)