Preprints:
- Luo Luo, Xue Cui, Tingkai Jia, Cheng Chen. Decentralized Stochastic Nonconvex Optimization under the Relaxed Smoothness. arXiv preprint:2509.08726, 2025.
- Lesi Chen, Chengchang Liu, Luo Luo, Jingzhao Zhang. Computationally Faster Newton Methods by Lazy Evaluations. arXiv preprint:2501.17488, 2025.
- Zhiling Zhou, Zhuanghua Liu, Chengchang Liu, Luo Luo. Incremental Gauss–Newton Methods with Superlinear Convergence Rates. arXiv preprint:2407.03195, 2024.
- Chengchang Liu, Cheng Chen, Luo Luo. Symmetric Rank-k Methods. arXiv preprint:2303.16188, 2023.
- Chengchang Liu, Luo Luo. Regularized Newton Methods for Monotone Variational Inequalities with Hölder Continuous Jacobians. arXiv preprint:2212.07824, 2022.
- Luo Luo, Yunyan Bai, Lesi Chen, Yuxing Liu, Haishan Ye. On the Complexity of Decentralized Smooth Nonconvex Finite-Sum Optimization. arXiv preprint:2210.13931, 2022.
Publications:
- Hongxu Chen, Ke Wei, Haishan Ye, Luo Luo. A Near-Optimal Algorithm for Decentralized Convex-Concave Finite-Sum Minimax Optimization. Advances in Neural Information Processing Systems (NeurIPS), 2025. Spotlight.
- Binbin Huang, Luo Luo, Yanghua Xiao, Deqing Yang, Baojian Zhou. Accelerated Evolving Set Processes for Local PageRank Computation. Advances in Neural Information Processing Systems (NeurIPS), 2025.
- Lesi Chen, Chengchang Liu, Luo Luo, Jingzhao Zhang. Solving Convex-Concave Problems with Õ(ϵ^−4/7) Second-Order Oracle Complexity. Conference on Learning Theory (COLT), 2025. Best Student Paper Award.
- Kunjie Ren, Luo Luo. A Parameter-Free and Near-Optimal Zeroth-Order Algorithm for Stochastic Convex Optimization. International Conference on Machine Learning (ICML), 2025.
- Chengchang Liu, Luo Luo, John C.S. Lui. An Enhanced Levenberg–Marquardt Method via Gram Reduction. AAAI Conference on Artificial Intelligence (AAAI), 2025.
- Lesi Chen, Luo Luo. Near-Optimal Algorithms for Making the Gradient Small in Stochastic Minimax Optimization. Journal of Machine Learning Research (JMLR), 25(387):1−44, 2024.
- Qihao Zhou, Haishan Ye, Luo Luo. Near-Optimal Distributed Minimax Optimization under the Second-Order Similarity. Advances in Neural Information Processing Systems (NeurIPS), 2024.
- Zhuanghua Liu, Luo Luo, Bryan Kian Hsiang Low. Gradient-Free Methods for Nonconvex Nonsmooth Stochastic Compositional Optimization. Advances in Neural Information Processing Systems (NeurIPS), 2024.
- Shihong Ding, Long Yang, Luo Luo, Cong Fang. Optimizing over Multiple Distributions under Generalized Quasar-Convexity Condition. Advances in Neural Information Processing Systems (NeurIPS), 2024.
- Zhuanghua Liu, Cheng Chen, Luo Luo, Bryan Kian Hsiang Low. Zeroth-Order Methods for Constrained Nonconvex Nonsmooth Stochastic Optimization. International Conference on Machine Learning (ICML), 2024. Oral.
- Yunyan Bai, Yuxing Liu, Luo Luo. On the Complexity of Finite-Sum Smooth Optimization under the Polyak–Łojasiewicz Condition. International Conference on Machine Learning (ICML), 2024. Spotlight.
- Yuxing Liu, Lesi Chen, Luo Luo. Decentralized Convex Finite-Sum Optimization with Better Dependence on Condition Numbers. International Conference on Machine Learning (ICML), 2024.
- Lesi Chen, Haishan Ye, Luo Luo. An Efficient Stochastic Algorithm for Decentralized Nonconvex-Strongly-Concave Minimax Optimization. International Conference on Artificial Intelligence and Statistics (AISTATS), 2024.
- Zhuanghua Liu, Luo Luo, Bryan Kian Hsiang Low. Incremental Quasi-Newton Methods with Faster Superlinear Convergence Rates. AAAI Conference on Artificial Intelligence (AAAI), 2024.