Probing then Editing: A Push-Pull Framework for Retain-Free Machine Unlearning in Industrial IoT

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of selectively forgetting outdated or erroneous knowledge in Industrial Internet of Things (IIoT) models—under constraints of data silos and privacy compliance—this paper proposes PTE, a retain-free machine unlearning framework that requires only the to-be-forgotten data and the original model, with no access to retained training data. Methodologically, PTE employs gradient ascent to probe decision boundaries and leverages the model's own predictions to generate editing instructions; introduces a push-pull collaborative optimization mechanism to precisely erase target-class knowledge while preserving non-target knowledge; and incorporates masked knowledge distillation to maintain model utility. Evaluated on industrial (e.g., CWRU, SCUT-FD) and general benchmarks, PTE outperforms state-of-the-art methods, achieving a strong trade-off between forgetting effectiveness and post-unlearning model performance while remaining privacy-preserving and computationally efficient.

📝 Abstract
In dynamic Industrial Internet of Things (IIoT) environments, models need the ability to selectively forget outdated or erroneous knowledge. However, existing methods typically rely on retain data to constrain model behavior, which increases computational and energy burdens and conflicts with industrial data silos and privacy compliance requirements. To address this, we propose a novel retain-free unlearning framework, referred to as Probing then Editing (PTE). PTE frames unlearning as a probe-edit process: first, it probes the decision boundary neighborhood of the model on the to-be-forgotten class via gradient ascent and generates corresponding editing instructions using the model's own predictions. Subsequently, a push-pull collaborative optimization is performed: the push branch actively dismantles the decision region of the target class using the editing instructions, while the pull branch applies masked knowledge distillation to anchor the model's knowledge on retained classes to their original states. Benefiting from this mechanism, PTE achieves efficient and balanced knowledge editing using only the to-be-forgotten data and the original model. Experimental results demonstrate that PTE achieves an excellent balance between unlearning effectiveness and model utility across multiple general and industrial benchmarks such as CWRU and SCUT-FD.
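The probing step described above can be illustrated with a minimal sketch. The paper does not publish reference code, so the model here (a linear softmax classifier), the step size and iteration count, and the rule for deriving editing instructions (relabeling each forget-set sample with the model's next-most-confident class after the ascent) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def probe_editing_labels(W, b, X_forget, y_forget, step=0.5, n_steps=10):
    """Gradient-ascent probe: move forget-set samples toward the decision
    boundary by ascending the cross-entropy loss w.r.t. the input, then
    read the model's own predictions to produce editing labels."""
    X = X_forget.copy()
    n = len(X)
    for _ in range(n_steps):
        P = softmax(X @ W + b)             # class probabilities, shape (n, C)
        G = P.copy()
        G[np.arange(n), y_forget] -= 1.0   # dL/dlogits for cross-entropy
        grad_X = G @ W.T                   # chain rule back to the input
        X += step * grad_X                 # ascend: increase the loss
    P = softmax(X @ W + b)
    P[np.arange(n), y_forget] = -np.inf    # exclude the class being forgotten
    return P.argmax(axis=1)                # the model's own next-best class
```

In this toy setting the gradient of the loss with respect to the input is available in closed form; with a deep network the same probe would be one backward pass per ascent step.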
Problem

Research questions and friction points this paper is trying to address.

Selectively forgetting outdated knowledge in IIoT models without retain data
Eliminating computational burdens from data retention in industrial environments
Achieving privacy-compliant machine unlearning while maintaining model utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probe-edit process: gradient ascent probes decision boundaries, model self-predictions yield editing instructions
Push-pull optimization dismantles decision regions
Masked distillation anchors retained class knowledge
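The push-pull objective sketched in these bullets can be written down concretely. Because the retain-free setting provides only the forget data, the sketch below distills the frozen original model on the forget batch with the target class masked out; the loss weighting `alpha`, temperature `T`, and exact masking scheme are assumptions for illustration, not the paper's stated hyperparameters.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def push_pull_loss(student_logits, teacher_logits, edit_labels,
                   forget_class, alpha=1.0, T=2.0):
    """Push: cross-entropy driving forget-batch samples toward their
    editing labels, dismantling the target-class decision region.
    Pull: masked knowledge distillation anchoring the student to the
    frozen original model on every class except the forgotten one."""
    n, C = student_logits.shape
    # push branch: fit the editing instructions
    P = softmax(student_logits)
    push = -np.log(P[np.arange(n), edit_labels] + 1e-12).mean()
    # pull branch: mask the forgotten class, then distill (KL divergence)
    keep = np.arange(C) != forget_class
    s = softmax(student_logits[:, keep] / T)
    t = softmax(teacher_logits[:, keep] / T)
    pull = (t * (np.log(t + 1e-12) - np.log(s + 1e-12))).sum(axis=1).mean()
    return push + alpha * (T ** 2) * pull
```

Masking the forgotten class before distillation is what lets the pull branch preserve non-target knowledge without simultaneously pulling the erased class back toward its original logits.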
Jiao Chen
Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou, China
Weihua Li
School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, China
Jianhua Tang
Shien-Ming Wu School of Intelligent Engineering, South China University of Technology
6G · Edge Computing · Network Slicing · Industrial Internet of Things · Industrial AI