Browse publications on Google Scholar ↗
Resume (English only)
Academic Achievements
Publications:
- 'Constrained Optimization From a Control Perspective via Feedback Linearization', accepted to NeurIPS 2025.
- 'Optimism as Risk-Seeking in Multi-Agent Reinforcement Learning', new paper.
- 'On the Optimal Control of Network LQR with Spatially-exponential Decaying Structure', accepted to Automatica.
- 'Scalable Spectral Representations for Multi-agent Reinforcement Learning in Network MDPs', accepted to AISTATS 2025.
- 'Soft Robust MDPs and Risk-Sensitive MDPs: Equivalence, Policy Gradient, and Sample Complexity', accepted to ICLR.
- 'Gradient play in stochastic games: stationary points, convergence, and sample complexity', accepted to IEEE Transactions on Automatic Control (TAC).
Awards:
- Selected for the Rising Stars program at the 2025 Northeast Robotics Colloquium (NERC).
- EECS Rising Star, 2024.
- Recipient of the MIT Postdoctoral Fellowship for Engineering Excellence.
Research Experience
Postdoctoral Fellow for Engineering Excellence at MIT, working with Prof. Asu Ozdaglar and Prof. Gioele Zardini.
Education
Ph.D.: Harvard University, School of Engineering and Applied Sciences. Advisor: Prof. Na Li.
B.S.: Peking University, Department of Mathematics, Scientific and Engineering Computing. Graduated in 2019.
Background
Research Interests: Reinforcement learning, control theory, machine learning, and multi-agent systems.
Background: Dedicated to research on learning, control, and decision-making in multi-agent systems, aiming to design scalable, efficient, and provably correct learning and control algorithms that address challenges such as communication constraints, strategic behavior, and model uncertainty.