🤖 AI Summary
This study examines a fundamental tension between differential privacy and data valuation: the former requires outputs to be insensitive to individual records, while the latter demands precise quantification of each record's contribution. The work systematically analyzes how mainstream valuation methods, such as Shapley values and influence functions, break down under differential privacy constraints, identifying the high-sensitivity components within these algorithms. It proposes design principles for privacy-amenable valuation mechanisms and uses sensitivity analysis together with privacy-utility trade-off evaluations to show how current approaches lose the discriminative power of rare samples. By delineating the feasible boundaries of private data valuation, this research lays a theoretical foundation for practical mechanisms that jointly uphold privacy guarantees and valuation utility.
📝 Abstract
Data valuation methods quantify how individual training examples contribute to a model's behavior, and are increasingly used for dataset curation, auditing, and emerging data markets. As these techniques become operational, they raise serious privacy concerns: valuation scores can reveal whether a person's data was included in training, whether it was unusually influential, or what sensitive patterns exist in proprietary datasets. This motivates the study of privacy-preserving data valuation. However, privacy is fundamentally in tension with valuation utility under differential privacy (DP). DP requires outputs to be insensitive to any single record, while valuation methods are explicitly designed to measure per-record influence. As a result, naive privatization often destroys the fine-grained distinctions needed to rank or attribute value, particularly in heterogeneous datasets where rare examples exert outsized effects. In this work, we analyze the feasibility of DP-compatible data valuation. We identify the core algorithmic primitives across common valuation frameworks that induce prohibitive sensitivity, explaining why straightforward DP mechanisms fail. We further derive design principles for more privacy-amenable valuation procedures and empirically characterize how privacy constraints degrade ranking fidelity across representative methods and datasets. Our results clarify the limits of current approaches and provide a foundation for developing valuation methods that remain useful under rigorous privacy guarantees.
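The failure mode described above can be sketched numerically. The toy example below (all distributions, parameters, and the leave-one-out scores themselves are hypothetical, not taken from the paper) privatizes per-record valuation scores with the Laplace mechanism, calibrating noise to worst-case sensitivity, and measures how much of the original ranking survives. Because a few rare records have outsized influence, the sensitivity, and hence the noise, is large, and rank fidelity collapses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-record valuation scores (e.g. leave-one-out influence):
# heavy-tailed, with a handful of rare records that dominate.
n = 1000
scores = rng.exponential(scale=1.0, size=n)
scores[:10] *= 50.0  # rare, highly influential examples

# Naive privatization via the Laplace mechanism. Removing the single most
# influential record can shift a score by up to max|score|, so worst-case
# sensitivity is driven by exactly the rare examples we want to detect.
epsilon = 1.0
sensitivity = scores.max()
noisy = scores + rng.laplace(scale=sensitivity / epsilon, size=n)

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

# Noise calibrated to worst-case sensitivity scrambles the ranking.
print(f"rank fidelity after DP noise: {spearman(scores, noisy):.3f}")
```

The point of the sketch is that the sensitivity term is set by the very records whose influence the valuation is supposed to surface, which is why per-record noise calibration erases the fine-grained distinctions needed for ranking and attribution.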