AI Summary
This paper addresses the tension between counterfactual explanation of black-box models and user privacy in high-stakes settings by formalizing the Private Counterfactual Retrieval (PCR) problem: a user securely retrieves the nearest counterfactual instance from an institution's database without revealing their private feature vector. Method: We propose PCR schemes that keep the user's feature vector information-theoretically private from the institution, extend them to I-PCR for the setting where a private subset of immutable features must remain unchanged and hidden, and further incorporate user preferences on attribute transformations to yield more actionable explanations. The schemes offer varying degrees of database-side privacy, trading off how much of the institution's database leaks to the user. Results: Numerical experiments support the theoretical guarantees of zero information leakage about the user's input and compare the database leakage incurred by the competing schemes.
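A standard building block for the kind of information-theoretic privacy claimed here is additive secret sharing over a finite field: any subset of fewer than all shares is uniformly random and therefore reveals nothing about the hidden value. The sketch below is illustrative only; the modulus `P`, the function names, and the choice of additive sharing as the underlying primitive are assumptions for exposition, not a description of the paper's exact protocol.

```python
import random

P = 2**31 - 1  # illustrative prime modulus; field size is an assumption

def share(value, n=2):
    """Split an integer mod P into n additive shares.

    Any n-1 shares are independently uniform over the field, so they
    carry zero information about `value` -- the information-theoretic
    hiding property that PCR-style schemes rely on.
    """
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod P."""
    return sum(shares) % P

# A user could split their feature value among non-colluding parties:
secret = 42
s = share(secret, n=3)
assert reconstruct(s) == secret
```

Each individual share (or any incomplete subset) is statistically independent of the secret, which is what distinguishes information-theoretic privacy from computational privacy based on hardness assumptions.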
Abstract
Transparency and explainability are two important considerations when employing black-box machine learning models in high-stakes applications. Providing counterfactual explanations is one way of meeting this requirement. However, it also poses a threat to the privacy of the institution providing the explanation, as well as of the user requesting it. In this work, we are primarily concerned with the privacy of a user who wants to retrieve a counterfactual instance without revealing their feature vector to the institution. Our framework retrieves the exact nearest-neighbor counterfactual explanation from a database of accepted points while achieving perfect, information-theoretic privacy for the user. First, we introduce the problem of private counterfactual retrieval (PCR) and propose a baseline PCR scheme that keeps the user's feature vector information-theoretically private from the institution. Building on this, we propose two further schemes that reduce the amount of information leaked about the institution's database to the user, compared to the baseline scheme. Second, we relax the assumption that all features are mutable and consider the setting of immutable PCR (I-PCR). Here, the user retrieves the nearest counterfactual without altering a private subset of their features, which constitutes the immutable set, while keeping both their feature vector and the immutable set private from the institution. For this setting, we propose two schemes that preserve the user's privacy information-theoretically but ensure varying degrees of database privacy. Third, we extend our PCR and I-PCR schemes to incorporate the user's preferences on transforming their attributes, so that a more actionable explanation can be received. Finally, we present numerical results to support our theoretical findings and compare the database leakage of the proposed schemes.
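To fix intuition for the retrieval objective itself, the following is a minimal, deliberately NON-private sketch of nearest-counterfactual retrieval, including the I-PCR restriction to candidates that agree with the user on the immutable features. All names and the toy database are hypothetical; the paper's contribution is performing this exact search while the institution learns nothing about `x` or the immutable set.

```python
import math

def nearest_counterfactual(x, accepted_db, immutable_idx=()):
    """Toy, in-the-clear nearest-counterfactual retrieval.

    x            -- the user's feature vector (kept hidden in the
                    paper's PCR schemes; exposed here for illustration).
    accepted_db  -- points accepted by the classifier.
    immutable_idx-- indices the user will not change (I-PCR setting):
                    only candidates matching x on those features count.
    """
    candidates = [
        c for c in accepted_db
        if all(c[i] == x[i] for i in immutable_idx)
    ]
    # Euclidean nearest neighbor among the feasible accepted points.
    return min(candidates, key=lambda c: math.dist(c, x))

db = [(0.0, 1.0), (1.0, 2.0), (3.0, 0.0)]
print(nearest_counterfactual((1.0, 0.0), db))                      # (0.0, 1.0)
print(nearest_counterfactual((1.0, 0.0), db, immutable_idx=(0,)))  # (1.0, 2.0)
```

Note how the immutability constraint can change the answer: the globally nearest accepted point may be infeasible once some features are pinned to the user's values.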