News
Released Deep Think with Confidence; organized the first workshop on Efficient Reasoning at NeurIPS 2025; gave guest lectures at Princeton University and Rice University; presented work on Memory-Efficient LLM Training at MLSys'24.
Research Experience
Works as a Research Scientist at Meta FAIR. Research projects include modern optimization algorithms (e.g., GaLore, GaLore 2, signSGD-MV); efficient LLM reasoning and large-scale reinforcement learning (e.g., DeepConf, GRESO, M2PO); and the foundations of deep learning, quantization, and efficient inference.
Education
Received a Ph.D. from Caltech.
Background
Research interests include optimization, reasoning, and efficiency. Specializes in uncovering the statistical principles underlying Large Language Models (LLMs), with the aim of developing theoretically grounded, scalable, and practically efficient algorithms.
Miscellany
Personal email: jwzzhao@gmail.com; Work email: jwzhao@meta.com