Data Reliability Scoring

📅 2025-10-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the problem of assessing dataset reliability in the absence of ground-truth labels. To mitigate systematic biases arising from strategic data sources, we propose a Gram-determinant-based reliability scoring method: it models the observed data and the outcomes of multiple statistical experiments as distributional vectors and quantifies data quality via the volume of the subspace spanned by these vectors. Our method is the unique reliability score satisfying experiment invariance, guaranteeing a consistent ranking of datasets across diverse experimental configurations. Experiments on synthetic noise models, CIFAR-10 embeddings, and real-world employment data demonstrate that the proposed metric robustly captures bias magnitude under heterogeneous observation mechanisms, significantly outperforming existing unsupervised evaluation baselines.

📝 Abstract
How can we assess the reliability of a dataset without access to ground truth? We introduce the problem of reliability scoring for datasets collected from potentially strategic sources. The true data are unobserved, but we see outcomes of an unknown statistical experiment that depends on them. To benchmark reliability, we define ground-truth-based orderings that capture how much reported data deviate from the truth. We then propose the Gram determinant score, which measures the volume spanned by vectors describing the empirical distribution of the observed data and experiment outcomes. We show that this score preserves several ground-truth-based reliability orderings and, uniquely up to scaling, yields the same reliability ranking of datasets regardless of the experiment -- a property we term experiment agnosticism. Experiments on synthetic noise models, CIFAR-10 embeddings, and real employment data demonstrate that the Gram determinant score effectively captures data quality across diverse observation processes.
Problem

Research questions and friction points this paper is trying to address.

Assessing dataset reliability without ground truth access
Measuring deviation from the truth for data reported by strategic sources
Developing experiment-agnostic reliability scoring methodology
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Gram determinant score for dataset reliability
Measures volume spanned by data and outcome vectors
Achieves experiment-agnostic reliability ranking across datasets
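The core computation behind the score is the determinant of a Gram matrix, which equals the squared volume spanned by a set of vectors. The sketch below shows only that geometric computation, assuming the inputs are already empirical distribution vectors for the observed data and the experiment outcomes; the paper's exact construction of those vectors, and any normalization it applies, is not reproduced here.

```python
import numpy as np

def gram_determinant_score(vectors):
    """Squared volume spanned by a set of row vectors, computed as the
    determinant of their Gram matrix G = V V^T.

    `vectors` is a (k, d) array whose rows are, e.g., the empirical
    distribution of the observed data and of each experiment's outcomes
    (the pairing of rows with distributions is an assumption of this
    sketch, not the paper's specification)."""
    V = np.asarray(vectors, dtype=float)
    G = V @ V.T  # (k, k) Gram matrix of pairwise inner products
    return np.linalg.det(G)
```

Intuitively, nearly collinear rows (outcomes that carry little independent information about the data) span almost no volume and drive the determinant toward zero, while mutually orthogonal rows maximize it.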