Unlearning for Federated Online Learning to Rank: A Reproducibility Study

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the model-unlearning challenge in Federated Online Learning to Rank (FOLTR) under the "right to be forgotten." It systematically evaluates five unlearning strategies across under-unlearning and over-unlearning scenarios. To overcome the limitations of single-metric evaluation, we propose a dual-dimensional (diversity and utility) assessment framework that integrates gradient rollback, influence-function estimation, and model fine-tuning, enabling the first multi-dimensional, verifiable, and quantitative analysis of unlearning efficacy in FOLTR. Experiments uncover inherent trade-offs between privacy preservation and ranking performance across strategies. We publicly release all code and datasets, substantially enhancing the reproducibility, credibility, and practical applicability of unlearning evaluation in federated ranking systems.
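One of the strategies named above, gradient rollback, can be illustrated with a minimal sketch: the server logs each client's contribution to every aggregation round, and unlearning a client amounts to subtracting its logged contributions from the global model. The function names, the plain-averaging aggregation, and the linear update logging below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def federated_round(global_model, client_updates, lr=0.1):
    """One simplified FedAvg-style round.

    Averages client updates into the global model and returns each
    client's effective (scaled) contribution so it can be rolled back later.
    """
    avg = np.mean(list(client_updates.values()), axis=0)
    contributions = {cid: lr * u / len(client_updates)
                     for cid, u in client_updates.items()}
    return global_model + lr * avg, contributions

def rollback_unlearn(model, contribution_log, client_id):
    """Gradient rollback: subtract one client's logged contributions."""
    for round_contribs in contribution_log:
        if client_id in round_contribs:
            model = model - round_contribs[client_id]
    return model
```

Because the logged updates enter the model additively in this sketch, rolling back a client exactly removes its influence; in a real FOLTR system, later rounds depend non-linearly on earlier ones, which is precisely why rollback can over- or under-unlearn.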

📝 Abstract
This paper reports on findings from a comparative study on the effectiveness and efficiency of federated unlearning strategies within Federated Online Learning to Rank (FOLTR), with specific attention to systematically analysing the unlearning capabilities of methods in a verifiable manner. Federated approaches to the ranking of search results have recently garnered attention as a way to address users' privacy concerns. In FOLTR, privacy is safeguarded by collaboratively training ranking models across decentralized data sources, preserving individual user data while optimizing search results based on implicit feedback, such as clicks. Recent legislation introduced across numerous countries is establishing the so-called "right to be forgotten", according to which services based on machine learning models, like those in FOLTR, should provide capabilities that allow users to remove their own data from those used to train models. This has sparked the development of unlearning methods, along with evaluation practices to measure whether unlearning of a user's data successfully occurred. Current evaluation practices are, however, often controversial, necessitating the use of multiple metrics for a more comprehensive assessment, yet previous proposals of unlearning methods only used single evaluation metrics. This paper addresses this limitation: our study rigorously assesses the effectiveness of unlearning strategies in managing both under-unlearning and over-unlearning scenarios, using adapted and newly proposed evaluation metrics. Thanks to our detailed analysis, we uncover the strengths and limitations of five unlearning strategies, offering valuable insights into optimizing federated unlearning to balance data privacy and system performance within FOLTR. We publicly release our code and complete results at https://github.com/Iris1026/Unlearning-for-FOLTR.git.
Problem

Research questions and friction points this paper is trying to address.

Evaluating federated unlearning strategies in FOLTR
Assessing under-unlearning and over-unlearning scenarios
Balancing data privacy and system performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated unlearning strategies for privacy
Multiple metrics for comprehensive unlearning evaluation
Balancing data privacy and system performance
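The under-/over-unlearning distinction that runs through both lists can be sketched as a simple diagnostic: compare the unlearned model's ranking performance (e.g. nDCG) on the forgotten client's queries against a retrain-from-scratch baseline. The function name, the gap-to-baseline criterion, and the tolerance value are illustrative assumptions, not the paper's actual metrics.

```python
def unlearning_gap(perf_unlearned, perf_retrained, tol=0.01):
    """Classify an unlearning outcome from a performance gap.

    perf_unlearned: ranking score of the unlearned model on the
                    forgotten client's data.
    perf_retrained: score of a model retrained without that client
                    (the gold-standard baseline).
    """
    gap = perf_unlearned - perf_retrained
    if gap > tol:
        return "under-unlearning"   # forgotten data still influences the model
    if gap < -tol:
        return "over-unlearning"    # removal hurt the model more than necessary
    return "effective"
```

A single scalar like this is exactly the kind of one-metric view the paper argues against; in practice it would be one component of a multi-metric assessment.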