Binghui Peng
Google Scholar ID: twlFI3sAAAAJ
Stanford University
Computer Science
Citations & Impact (all-time)
  • Citations: 1,278
  • H-index: 21
  • i10-index: 28
  • Publications: 20
  • Co-authors: 0
Contact
No contact links provided.
Academic Achievements
  • Paper 'Theoretical limitations of multi-layer Transformer' published in FOCS 2025, authors: Lijie Chen, Binghui Peng, Hongxun Wu.
  • Paper 'Self-Attention Networks Can Process Bounded Hierarchical Languages' published in ACL 2021, authors: Shunyu Yao, Binghui Peng, Christos Papadimitriou, Karthik Narasimhan.
  • Paper 'Fast swap regret minimization and applications to approximate correlated equilibria' published in STOC 2024, authors: Binghui Peng, Aviad Rubinstein.
  • Paper 'High dimensional online calibration in polynomial time' published in FOCS 2025, author: Binghui Peng.
  • Paper 'Complexity of Equilibria in First-Price Auctions under General Tie-Breaking Rules' published in STOC 2023, authors: Xi Chen, Binghui Peng.
  • Paper 'Memory-Query Tradeoffs for Randomized Convex Optimization' published in FOCS 2023, authors: Xi Chen, Binghui Peng.
  • Paper 'Near Optimal Memory-Regret Tradeoff for Online Learning' published in FOCS 2023, authors: Binghui Peng, Aviad Rubinstein.
  • Paper 'Memory Bounds for Continual Learning' published in FOCS 2022, authors: Xi Chen, Christos Papadimitriou, Binghui Peng.
  • Paper 'The Complexity of Dynamic Least-Squares Regression' published in FOCS 2023, authors: Xi Chen, Binghui Peng.
Research Experience
  • Currently a visiting faculty researcher at Google Research, New York. Previously a postdoctoral research fellow at Stanford University (supervised by Aviad Rubinstein and Amin Saberi) and at the Simons Institute, UC Berkeley.
Education
  • Ph.D. from Columbia University, advised by Xi Chen and Christos Papadimitriou; Bachelor's degree from the Yao Class at Tsinghua University.
Background
  • Research interests include machine learning theory, game theory, and theoretical computer science, with a recent focus on the theory of large language models (LLMs).
Miscellany
  • Will join the Computer Science Department at the University of Maryland in January 2026.