🤖 AI Summary
To address the high computational cost, model accuracy degradation, and privacy–utility trade-off inherent in machine unlearning, particularly when deleting user data, this paper proposes a service-provider-initiated auction mechanism for redeeming data usage rights. Unlike conventional approaches, the mechanism is proactively launched by the service provider to purchase data usage rights from voluntarily exiting users, without requiring prior knowledge of individual privacy preferences. Grounded in mechanism design and auction theory, it simultaneously ensures compliance with GDPR/CCPA, economic efficiency, and model stability. Empirical evaluations show that the method significantly reduces unlearning costs, mitigates model performance deterioration, and improves social welfare. The core contribution is the first buyer-driven dynamic redemption auction framework, enabling joint optimization of privacy protection, regulatory compliance, and system utility.
📝 Abstract
The rapid growth of artificial intelligence (AI) has raised privacy concerns over user data, leading to regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). With the essential toolbox provided by machine unlearning, AI service providers are now able to remove user data from their trained models as well as the training datasets, so as to comply with such regulations. However, extensive data redemption can be costly and can degrade model accuracy. To balance the cost of unlearning against privacy protection, we propose a buyer-initiated auction mechanism for data redemption, enabling the service provider to purchase data from willing users with appropriate compensation. This approach does not require the server to have any a priori knowledge of the users' privacy preferences, and provides an efficient solution for maximizing the social welfare in the investigated problem.
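To make the buyer-initiated setting concrete, the following is a minimal, purely illustrative sketch of a budget-constrained procurement (reverse) auction: exiting users report the minimum compensation they would accept for their data usage rights, and the service provider selects a set of sellers at a uniform price. This is a generic textbook-style construction, not the mechanism proposed in the paper; the function name, the budget rule, and the uniform-price choice are all assumptions made for illustration.

```python
def redemption_auction(asks, budget):
    """Generic uniform-price procurement sketch (illustrative only).

    asks:   dict mapping user_id -> minimum acceptable compensation
    budget: total amount the service provider is willing to spend

    Returns (winners, price): the users whose data rights are redeemed,
    and the uniform payment each winner receives.
    """
    # Sort users by their ask, cheapest first.
    ordered = sorted(asks.items(), key=lambda kv: kv[1])

    best_k, price = 0, 0.0
    for k in range(1, len(ordered) + 1):
        # Candidate uniform price: the first losing ask, or the
        # highest winning ask if everyone would be selected.
        p = ordered[k][1] if k < len(ordered) else ordered[k - 1][1]
        # Keep the largest seller set that fits within the budget.
        if k * p <= budget:
            best_k, price = k, p

    winners = [uid for uid, _ in ordered[:best_k]]
    return winners, price
```

For example, with asks `{"a": 1, "b": 2, "c": 10}` and a budget of 5, only user `a` is selected and paid 2 (the first losing ask), so the payment covers the winner's ask and total spending stays within budget. A deployed mechanism would additionally need to establish truthfulness and welfare guarantees, which is precisely the mechanism-design content of the paper.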