Publications
SIGMOD 2024: F3KM: Federated, Fair, and Fast k-means (First author)
KDD 2025: FedAPM: Federated Learning via ADMM with Partial Model Personalization (First author)
VLDB 2026: Highly-Efficient Large-Scale k-means with Individual Fairness (First author)
VLDB 2025: Federated and Balanced Clustering for High-dimensional Data (Co-first author)
Academic Service
Serves as a reviewer for JMLR and IEEE TKDE
Honors & Awards
National Scholarship, Wuhan University, 2023
DiDi Scholarship Second Prize, Wuhan University, 2024
DiDi Scholarship Third Prize, Wuhan University, 2025
Background
Ph.D. student at the School of Computer Science, Wuhan University. Research interests include optimization theory, large language model training and fine-tuning, federated learning, and clustering algorithms.
Focuses on convex and non-convex optimization algorithms and their convergence analysis, with applications in machine learning and deep learning.
Studies efficient training methods for large language models and parameter-efficient fine-tuning techniques.
Explores privacy-preserving distributed machine learning, federated optimization algorithms, and communication-efficient training strategies in heterogeneous environments.
Develops scalable and fair clustering methods (e.g., k-means and spectral clustering) for large-scale data analysis.