Paper “Improving Model Fusion by Training-time Neuron Alignment with Fixed Neuron Anchors” accepted by IEEE TPAMI (Oct 2025)
Paper “WISE: Rethinking the Knowledge Memory for Lifelong Model Editing of Large Language Models” accepted at NeurIPS 2024
Paper “Editing as Unlearning: Are Knowledge Editing Methods Strong Baselines for Large Language Model Unlearning?” accepted at NeurIPS 2025 Workshops (Lock-LLM & LLM Evals)
Paper “FedGuCci: Making Local Models More Connected in Landscape for Federated Learning” accepted at KDD 2025
Paper “Revisiting Weighted Aggregation in Federated Learning with Neural Networks” accepted at ICML 2023
Paper “Can We Share Models If Sharing Data Is Not an Option?” published in Patterns (Cell Press)
Paper “No Fear of Classifier Biases: Neural Collapse Inspired Federated Learning with Synthetic and Fixed Classifier” accepted at ICCV 2023
Paper “Towards Effective Clustered Federated Learning: A Peer-to-peer Framework with Adaptive Neighbor Matching” published in IEEE Transactions on Big Data
Paper “Resource-Efficient Knowledge Editing for Mobile LLMs” won Best Poster Award at MobiUK 2025
Invited as Session Chair for KDD 2025
Background
Research Scientist at Tongyi Lab, Alibaba Group
Main research interests: Large Language Models (LLMs) and agentic intelligence, including LLM agents, reasoning, and multi-agent systems