PPC-GPT: Federated Task-Specific Compression of Large Language Models via Pruning and Chain-of-Thought Distillation

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual challenges of preserving domain-knowledge privacy and producing lightweight models for LLM deployment in resource-constrained settings, this paper proposes a federated, task-customized compression framework. The framework integrates differential privacy (ε = 2.0), LLM-driven chain-of-thought (CoT) synthetic data generation, structured pruning, and CoT-aware knowledge distillation: privacy-sensitive data perturbation occurs locally on clients, while synthetic data generation and efficient compression are performed centrally at the server. Evaluated across multiple text generation tasks, the resulting small language model (SLM) retains over 92% of the original LLM's performance while reducing communication overhead by 5.3×, significantly outperforming existing federated compression approaches. The core contribution is the first end-to-end federated compression paradigm that simultaneously delivers rigorous privacy guarantees, high-fidelity domain-knowledge retention, and computational efficiency.
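As a concrete illustration of the client-side step, below is a minimal, self-contained sketch of word-level DP text perturbation via the exponential mechanism at ε = 2.0. The toy vocabulary, the index-based distance function, and all function names are illustrative assumptions; the paper's actual perturbation mechanism is not reproduced here.

```python
# Hypothetical sketch: word-level DP perturbation with the exponential
# mechanism. A real system would use embedding distances over a full
# vocabulary; this toy version uses index distance over five words.
import math
import random

VOCAB = ["diagnosis", "illness", "condition", "disease", "symptom"]
DIST = {(a, b): abs(i - j) for i, a in enumerate(VOCAB)
                           for j, b in enumerate(VOCAB)}

def dp_replace(word: str, epsilon: float = 2.0) -> str:
    """Sample a replacement with probability proportional to exp(-eps*d/2)."""
    if word not in VOCAB:
        return word  # out-of-vocabulary tokens pass through in this sketch
    weights = [math.exp(-epsilon * DIST[(word, c)] / 2) for c in VOCAB]
    return random.choices(VOCAB, weights=weights, k=1)[0]

def perturb(text: str, epsilon: float = 2.0) -> str:
    """Client side: perturb each word before sending to the server's LLM."""
    return " ".join(dp_replace(w, epsilon) for w in text.split())

print(perturb("patient disease diagnosis"))
```

Smaller ε values flatten the sampling distribution (more noise, stronger privacy); larger values concentrate probability mass on the original word.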

📝 Abstract
Compressing Large Language Models (LLMs) into task-specific Small Language Models (SLMs) encounters two significant challenges: safeguarding domain-specific knowledge privacy and managing limited resources. To tackle these challenges, we propose PPC-GPT, an innovative privacy-preserving federated framework specifically designed for compressing LLMs into task-specific SLMs via pruning and Chain-of-Thought (CoT) distillation. PPC-GPT operates on a server-client federated architecture, where the client sends differentially private (DP) perturbed task-specific data to the server's LLM. The LLM then generates synthetic data along with corresponding rationales. This synthetic data is subsequently used for both LLM pruning and retraining. Additionally, we harness CoT knowledge distillation, leveraging the synthetic data to further improve the retraining of structurally pruned SLMs. Our experimental results demonstrate the effectiveness of PPC-GPT across various text generation tasks. By compressing LLMs into task-specific SLMs, PPC-GPT not only achieves competitive performance but also prioritizes data privacy protection.
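The CoT distillation step described above trains the pruned SLM on both the teacher-generated rationales and the final answers. Below is a minimal sketch of one common formulation, a weighted sum of an answer loss and a rationale loss; the alpha weighting and the two-loss structure are assumptions drawn from standard CoT distillation practice, not the paper's published objective.

```python
# Hypothetical sketch of a CoT-aware distillation objective for the pruned SLM.
import torch
import torch.nn.functional as F

def cot_distill_loss(answer_logits, answer_labels,
                     rationale_logits, rationale_labels, alpha=0.5):
    """Weighted sum of answer-prediction and rationale-generation losses.

    *_logits: (batch, seq, vocab); *_labels: (batch, seq), with -100 marking
    positions to ignore (e.g., prompt tokens and padding).
    """
    l_answer = F.cross_entropy(answer_logits.transpose(1, 2), answer_labels,
                               ignore_index=-100)
    l_rationale = F.cross_entropy(rationale_logits.transpose(1, 2),
                                  rationale_labels, ignore_index=-100)
    return alpha * l_answer + (1.0 - alpha) * l_rationale

# Toy usage with random tensors standing in for student outputs.
B, T, V = 2, 8, 100
answer_logits = torch.randn(B, T, V, requires_grad=True)
rationale_logits = torch.randn(B, T, V, requires_grad=True)
loss = cot_distill_loss(answer_logits, torch.randint(0, V, (B, T)),
                        rationale_logits, torch.randint(0, V, (B, T)))
loss.backward()
```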
Problem

Research questions and friction points this paper is trying to address.

Compress LLMs into task-specific SLMs.
Protect domain-specific knowledge privacy.
Manage limited computational resources effectively.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated task-specific compression
Pruning and Chain-of-Thought distillation (see the pruning sketch after this list)
Differentially private data perturbation
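To make the pruning component concrete, here is a minimal sketch of structured pruning on a feed-forward block, assuming a simple weight-norm importance score per hidden neuron; PPC-GPT's actual importance criterion and pruning granularity may differ.

```python
# Hypothetical sketch: structured pruning of an FFN block by dropping the
# intermediate neurons with the smallest L2 weight norms.
import torch
import torch.nn as nn

def prune_ffn(up: nn.Linear, down: nn.Linear, keep_ratio: float = 0.5):
    """Remove low-importance rows of `up` and the matching columns of `down`."""
    importance = up.weight.norm(dim=1)             # one score per hidden neuron
    k = max(1, int(keep_ratio * up.out_features))
    keep = importance.topk(k).indices.sort().values

    new_up = nn.Linear(up.in_features, k, bias=up.bias is not None)
    new_down = nn.Linear(k, down.out_features, bias=down.bias is not None)
    with torch.no_grad():
        new_up.weight.copy_(up.weight[keep])
        if up.bias is not None:
            new_up.bias.copy_(up.bias[keep])
        new_down.weight.copy_(down.weight[:, keep])
        if down.bias is not None:
            new_down.bias.copy_(down.bias)
    return new_up, new_down

# Toy usage: a 64 -> 256 -> 64 FFN pruned to 128 hidden neurons.
up, down = nn.Linear(64, 256), nn.Linear(256, 64)
up2, down2 = prune_ffn(up, down, keep_ratio=0.5)
print(up2.weight.shape, down2.weight.shape)  # (128, 64) and (64, 128)
```

Because whole neurons are removed (rows of the up-projection and the matching columns of the down-projection), the pruned layers stay dense and need no sparse kernels, which is the usual motivation for structured over unstructured pruning.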
Tao Fan
Sichuan University of Science and Engineering
Synchronization of complex networks, Consensus of multi-agent systems, Wireless sensor networks
Guoqiang Ma
WeBank Co., Ltd, Shenzhen, China
Yuanfeng Song
Unknown affiliation
NLP4Data, Data Visualization, Text2SQL, LLM
Lixin Fan
WeBank
Computer vision, machine learning, artificial intelligence, federated learning
Kai Chen
The Hong Kong University of Science and Technology, Hong Kong, China
Qiang Yang
The Hong Kong University of Science and Technology, Hong Kong, China; WeBank Co., Ltd, Shenzhen, China