Estimating Absolute Web Crawl Coverage From Longitudinal Set Intersections

📅 2026-03-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the challenge of accurately estimating the absolute coverage of a web crawler over the crawlable URL space in the absence of external ground-truth data. The authors propose a statistical method that relies solely on longitudinal crawl data from a single crawler. By analyzing the intersections of URLs across multiple consecutive crawls, they formulate an urn-model-based estimation framework and employ linear regression to infer the coverage ratio. Notably, the approach requires neither external benchmarks nor comparisons across multiple crawlers, making it applicable to any focused longitudinal crawling scenario. Experiments on 15 semi-annual crawls of the German academic web from 2013 to 2021 demonstrate that, under stable configurations, the crawler achieves approximately 46% coverage, thereby validating the method's effectiveness and practical utility.

πŸ“ Abstract
Web archives preserve portions of the web, but quantifying their completeness remains challenging. Prior approaches have estimated the coverage of a crawl either by comparing the outcomes of multiple crawlers, or by comparing the results of a single crawl to external ground-truth datasets. We propose a method to estimate the absolute coverage of a crawl using only the archive's own longitudinal data, i.e., the data collected by multiple subsequent crawls. Our key insight is that coverage can be estimated from the empirical URL overlaps between subsequent crawls, which are in turn well described by a simple urn process. The parameters of the urn model can then be inferred from longitudinal crawl data using linear regression. Applied to our focused crawl configuration of the German Academic Web, with 15 semi-annual crawls between 2013 and 2021, we find a coverage of approximately 46 percent of the crawlable URL space for the stable crawl configuration regime. Our method is extremely simple, requires no external ground truth, and generalizes to any longitudinal focused crawl.
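The intuition behind the overlap-based estimate can be illustrated with a small simulation. This is a sketch under an assumed simplification (not the paper's exact model): if each crawl captures every crawlable URL independently with probability c (the coverage), then for two crawls A and B the expected overlap satisfies E[|A ∩ B|] = N·c² while E[|A|] = N·c, so the overlap fraction |A ∩ B| / |A| estimates c. The paper infers the urn-model parameters via linear regression over many crawls; here, for brevity, we simply average the pairwise estimates as a stand-in.

```python
import random

random.seed(42)
N = 100_000   # assumed size of the (unknown) crawlable URL space
c = 0.46      # assumed true coverage, matching the reported stable-regime value


def crawl():
    """Simulate one crawl: each URL is captured independently with probability c."""
    return {u for u in range(N) if random.random() < c}


crawls = [crawl() for _ in range(5)]

# Overlap fraction |A ∩ B| / |A| for each consecutive pair of crawls;
# under the independence assumption its expectation is exactly c.
estimates = [len(a & b) / len(a) for a, b in zip(crawls, crawls[1:])]
coverage_hat = sum(estimates) / len(estimates)
print(f"estimated coverage: {coverage_hat:.3f}")  # should land near c = 0.46
```

Note that the crawler never needs to know N: the estimate is computed purely from observed URL sets, which is what makes the approach applicable without external ground truth.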
Problem

Research questions and friction points this paper is trying to address.

web crawl coverage
longitudinal data
absolute coverage estimation
web archives
URL overlap
Innovation

Methods, ideas, or system contributions that make the work stand out.

web crawl coverage
longitudinal data
urn model
URL overlap
focused crawling
Michael Paris
Common Crawl Foundation
Grigori Paris
Independent Researcher
Fabian Baumann
University of Pennsylvania
computational social science · networks · cultural evolution