🤖 AI Summary
This study exposes systemic privacy risks in large-scale web crawling for machine learning datasets: empirical auditing reveals that over 12% of samples in mainstream datasets retain personally identifiable information (PII), and current “de-identification” practices fall significantly short of GDPR/CCPA legal anonymization standards. Methodologically, the work pioneers an integrated approach coupling large-scale PII detection, textual metadata analysis, and rigorous legal interpretation of privacy regulation. It demonstrates that the prevailing legal doctrine of “publicly available” data is fundamentally ill-suited to AI training contexts, creating critical regulatory gaps. The primary contribution is a normative proposal to reconceptualize the legal definition of “public data” to constrain indiscriminate web scraping; it advocates shifting AI data governance from technical self-regulation toward law-embedded frameworks. Findings provide empirically grounded, legally actionable foundations for compliant data curation and regulatory oversight.
📝 Abstract
We investigate the contents of web-scraped data for training AI systems, at sizes where human dataset curators and compilers no longer manually annotate every sample. Building on prior privacy concerns about machine learning models, we ask: What are the legal privacy implications of web-scraped machine learning datasets? In an empirical study of a popular training dataset, we find a significant presence of personally identifiable information despite sanitization efforts. Our audit provides concrete evidence to support the concern that any large-scale web-scraped dataset may contain personal data. We use these findings from a real-world dataset to inform our legal analysis with respect to existing privacy and data protection laws. We surface various privacy risks of current data curation practices that may propagate personal information to downstream models. From our findings, we argue for a reorientation of current frameworks of "publicly available" information to meaningfully limit the development of AI built upon indiscriminate scraping of the internet.
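The audit described above depends on automated PII detection at dataset scale. As a purely illustrative, hypothetical sketch (not the paper's actual pipeline, which would rely on far stronger detectors than bare regexes), a minimal scan for email- and phone-shaped strings across text samples might look like:

```python
import re

# Hypothetical, simplified PII patterns. A real audit would use dedicated
# tooling (NER models, validators, allow-lists), not these toy regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_samples(samples):
    """Return the fraction of samples with any PII-like match,
    plus per-category hit counts."""
    hits = {name: 0 for name in PII_PATTERNS}
    flagged = 0
    for text in samples:
        found = False
        for name, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                hits[name] += 1
                found = True
        flagged += found
    rate = flagged / len(samples) if samples else 0.0
    return rate, hits

samples = [
    "Contact me at jane.doe@example.com for details.",
    "Call 555-867-5309 after 5pm.",
    "Nothing personal in this sample.",
]
rate, hits = scan_samples(samples)
print(rate, hits)  # rate is 2/3 on these toy samples
```

Even this crude sketch makes the paper's point concrete: naive pattern matching surfaces residual PII, while the false negatives it inevitably misses are exactly why "sanitized" web-scraped corpora cannot be assumed anonymized.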