🤖 AI Summary
This work addresses multi-party classification scenarios, such as e-discovery, where a requester must retrieve nearly all relevant documents while strictly bounding the disclosure of irrelevant ones. We introduce the *Leave-One-Out dimension*, the minimal number of non-responsive documents that must be disclosed for verifiable validation of a classifier. For linear classifiers, we establish a trichotomy in the relationship between the margin and the disclosure volume: *O(1)*, *Ω(d)*, or *Ω(e^d)* documents in dimension *d*, depending on whether the margin exceeds, equals, or falls below a critical threshold. Leveraging combinatorial learning theory and robust modeling, we design a verifiable, fault-tolerant protocol whose disclosure is bounded by the Leave-One-Out dimension in the realizable setting. To our knowledge, this is the first protocol for privacy-sensitive multi-party classification that simultaneously attains theoretical optimality and end-to-end verifiability.
📝 Abstract
We consider the multi-party classification problem introduced by Dong, Hartline, and Vijayaraghavan (2022), motivated by electronic discovery. In this problem, our goal is to design a protocol that guarantees the requesting party receives nearly all responsive documents while minimizing the disclosure of nonresponsive documents. We develop verification protocols that certify the correctness of a classifier by disclosing a few nonresponsive documents. We introduce a combinatorial notion, the Leave-One-Out dimension of a family of classifiers, and show that the number of nonresponsive documents disclosed by our protocol is at most this dimension in the realizable setting, where a perfect classifier exists in the family. For linear classifiers with a margin, we characterize the trade-off between the margin and the number of nonresponsive documents that must be disclosed for verification. Specifically, we establish a trichotomy in this requirement: for $d$-dimensional instances, when the margin exceeds $1/3$, verification can be achieved by revealing only $O(1)$ nonresponsive documents; when the margin is exactly $1/3$, at least $\Omega(d)$ nonresponsive documents must be disclosed in the worst case; and when the margin is smaller than $1/3$, verification requires $\Omega(e^d)$ nonresponsive documents. We believe this result is of independent interest, with applications to coding theory and combinatorial geometry. We further extend our protocols to the nonrealizable setting, defining an analogous combinatorial quantity, the robust Leave-One-Out dimension, and to scenarios where the protocol is tolerant to misclassification errors by Alice.
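The trichotomy above turns on the margin of a linear classifier. As a minimal sketch, the snippet below computes the margin under the standard signed-distance definition (minimum of $y_i \langle w, x_i \rangle / \lVert w \rVert$ over labeled points); the paper's exact normalization conventions are not specified in the abstract, so this is an illustrative assumption, not the authors' definition.

```python
import numpy as np

def margin(w, X, y):
    """Margin of a homogeneous linear classifier w on labeled points (X, y):
    the minimum signed distance y_i * <w, x_i> / ||w|| over all points.
    Positive iff w classifies every point correctly."""
    w = np.asarray(w, dtype=float)
    scores = y * (X @ w) / np.linalg.norm(w)
    return scores.min()

# Toy 2-D instance: this w separates the two points with margin 1.0,
# well above the 1/3 threshold at which, per the trichotomy, only O(1)
# nonresponsive documents need to be revealed for verification.
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1, -1])
print(margin(np.array([1.0, 0.0]), X, y))  # → 1.0
```

Under this convention, instances whose computed margin sits above, at, or below $1/3$ would fall into the $O(1)$, $\Omega(d)$, and $\Omega(e^d)$ disclosure regimes, respectively.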