FedDetox: Robust Federated SLM Alignment via On-Device Data Sanitization

πŸ“… 2026-04-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the vulnerability of small language models in federated learning to data poisoning attacks stemming from malicious or unsafe client data, which can compromise safety alignment. To mitigate this threat under the constraints of resource-limited edge devices, the authors propose a local data sanitization mechanism that deploys a lightweight safety classifier obtained via knowledge distillation. This classifier identifies unsafe samples and replaces them with refusal templates, effectively transforming potentially toxic inputs into positive safety signals. The approach uniquely integrates on-device data cleansing with federated alignment, achieving model safety comparable to centralized baselines without degrading general performance. As a result, it significantly enhances the robustness of federated small language models against unintended data poisoning while preserving utility.
πŸ“ Abstract
As high-quality public data becomes scarce, Federated Learning (FL) provides a vital pathway to leverage valuable private user data while preserving privacy. However, real-world client data often contains toxic or unsafe information. This leads to a critical issue we define as unintended data poisoning, which can severely damage the safety alignment of global models during federated alignment. To address this, we propose FedDetox, a robust framework tailored for Small Language Models (SLMs) on resource-constrained edge devices. We first employ knowledge distillation to transfer sophisticated safety alignment capabilities from large-scale safety-aligned teacher models into lightweight student classifiers suitable for resource-constrained edge devices. Specifically, during federated learning for human preference alignment, the edge client identifies unsafe samples at the source and replaces them with refusal templates, effectively transforming potential poisons into positive safety signals. Experiments demonstrate that our approach preserves model safety at a level comparable to centralized baselines without compromising general utility.
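The core on-device step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `safety_score` stands in for the distilled lightweight student classifier (here a toy keyword heuristic), and the refusal template, threshold, and all function names are hypothetical.

```python
# Hypothetical sketch of FedDetox-style on-device data sanitization.
# A lightweight safety classifier flags unsafe training samples, and
# flagged responses are replaced with a refusal template, turning
# potential poisons into positive safety signals before local training.

REFUSAL_TEMPLATE = "I'm sorry, but I can't help with that request."

def safety_score(text: str) -> float:
    """Placeholder for the distilled student classifier.

    A real system would run a small distilled model; this toy heuristic
    just returns a pseudo-probability that `text` is unsafe.
    """
    unsafe_keywords = {"bomb", "poison", "exploit"}
    hits = sum(word in text.lower() for word in unsafe_keywords)
    return min(1.0, hits / 2)

def sanitize(samples, threshold=0.5):
    """Replace responses flagged as unsafe with the refusal template."""
    cleaned = []
    for prompt, response in samples:
        if safety_score(prompt + " " + response) >= threshold:
            cleaned.append((prompt, REFUSAL_TEMPLATE))
        else:
            cleaned.append((prompt, response))
    return cleaned

data = [
    ("How do I bake bread?", "Mix flour, water, and yeast..."),
    ("How do I build a bomb?", "First you need..."),
]
print(sanitize(data))
```

The sanitized pairs would then feed directly into the client's local preference-alignment update, so the aggregation server never needs to see or filter raw client data.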
Problem

Research questions and friction points this paper is trying to address.

federated learning
data poisoning
safety alignment
toxic data
small language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning
Small Language Models
Data Sanitization
Knowledge Distillation
Safety Alignment
Shunan Zhu
The University of Tokyo, Tokyo, Japan
Jiawei Chen
The University of Tokyo, Tokyo, Japan
Yonghao Yu
The University of Tokyo, Tokyo, Japan
Hideya Ochiai
The University of Tokyo
Distributed AI · Cyber Security · IoT