Mert Yuksekgonul

Google Scholar ID: s_zrmzUAAAAJ
Stanford University
machine learning, deep learning
Citations & Impact (all-time)
  • Citations: 4,748
  • H-index: 14
  • i10-index: 17
  • Publications: 20
  • Co-authors: 0
Resume
Academic Achievements
  • 2025, Nature - Optimizing generative AI by backpropagating language model feedback, Authors: Mert Yuksekgonul*, Federico Bianchi*, Joseph Boen*, Sheng Liu*, Pan Lu*, Zhi Huang*, Carlos Guestrin, James Zou.
  • 2023, ICLR (Oral) - When and why vision-language models behave like bags-of-words, and what to do about it?, Authors: Mert Yuksekgonul, Federico Bianchi, Pratyusha (Ria) Kalluri, Dan Jurafsky, James Zou.
  • 2023, NeurIPS - Beyond Confidence: Reliable Models Should Also Quantify Atypicality, Authors: Mert Yuksekgonul, Linjun Zhang, James Zou, Carlos Guestrin.
  • 2024, ICLR - Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models, Authors: Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi.
  • 2023, ICLR (Spotlight) - Post-hoc Concept Bottleneck Models, Authors: Mert Yuksekgonul, Maggie Wang, James Zou.
  • 2023, Nature Medicine - A visual-language foundation model for pathology image analysis using medical Twitter, Authors: Zhi Huang*, Federico Bianchi*, Mert Yuksekgonul, Thomas J Montine, James Zou.
  • 2022, ICML - Meaningfully debugging model mistakes using conceptual counterfactual explanations, Authors: Abubakar Abid*, Mert Yuksekgonul*, James Zou.
Research Experience
  • Contributed to multiple research projects focusing on continual learning in AI, the behavior of vision-language models, and the optimization of generative AI.
Education
  • PhD: Stanford University, Computer Science, advised by Carlos Guestrin and James Zou.
Background
  • PhD in Computer Science at Stanford, focusing on enabling AI to learn continuously and self-improve, e.g., through test-time training and TextGrad.