As of September 2025, several papers have been accepted or published at top venues: two papers at NeurIPS 2025, one on theoretical foundations of analog in-memory training and another on efficient bilevel optimization for LLM fine-tuning; two papers at ICML 2025, one on hyperparameter tuning for diffusion models and another on preference-guided multi-objective optimization; and a paper on a penalty-based bilevel gradient descent method published in Mathematical Programming (Series A).
Research Experience
Prior to joining Cornell, he conducted research at Rensselaer Polytechnic Institute under the guidance of his advisor. His current research focuses on the design of optimization algorithms and their applications to generative models and energy-efficient analog devices.
Education
PhD in Electrical and Computer Engineering, Cornell University (Cornell Tech), New York, NY, 2025 - present
PhD in Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY, 2021 - 2025
B.S. in Statistics, University of Science and Technology of China, Hefei, China, 2016 - 2020
Advisor: Prof. Tianyi Chen
Background
He is currently a PhD student in Electrical and Computer Engineering at Cornell University. His research focuses on optimization and machine learning, particularly bilevel and multi-objective optimization, with the aim of developing efficient algorithms that advance two key frontiers of AI: generative models (such as multimodal large language models and diffusion models) and next-generation AI computing (emerging energy-efficient analog devices). His goal is to bridge theory and practice, making AI models more efficient and scalable while designing algorithms tailored to emerging AI hardware architectures.
Miscellany
Contact: quanx1808@gmail.com
Address: 2 W Loop Rd, New York, NY, 10044