Abstract: Web measurement studies can shed light on not yet fully understood phenomena and thus are essential for analyzing how the modern Web works. This often requires building new and adjusting existing crawling setups, which has led to a wide variety of analysis tools for different (but related) aspects. If these efforts are not sufficiently documented, the reproducibility and replicability of the measurements may suffer---two properties that are crucial to sustainable research.
In this paper, we survey 117 recent research papers to derive best practices for Web-based measurement studies and specify criteria that need to be met in practice.
When applying these criteria to the surveyed papers, we find that the experimental setup and other aspects essential to reproducing and replicating results are often missing.
We underline the criticality of this finding by performing a large-scale Web measurement study on 4.5 million pages with 24 different measurement setups to demonstrate the influence of the individual criteria. Our experiments show that slight differences in the experimental setup directly affect the overall results and must be documented accurately and carefully.
Technical Remarks: This dataset holds additional material for the paper "Reproducibility and Replicability of Web Measurement Studies" submitted to the ACM Web Conference 2022. It contains the measurement data (requests, responses, visited URLs, cookies, and LocalStorage objects) we collected from 25 different profiles. All data is in CSV format (exported from the Google BigQuery service) and can be imported into any database.
Table sizes (according to Google BigQuery):
Cookies: 2.8 GB
LocalStorage: 6 GB
Requests: 626.6 GB
Responses: 501.6 GB
URL: 38 MB
Visits: 935 MB
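Since the tables are plain CSV exports, they can be loaded into a local database with a few lines of code. The following is a minimal sketch using Python's standard library and an in-memory SQLite database; the file name `cookies.csv` and its columns are hypothetical stand-ins for the actual exports, so the sketch first writes a tiny sample file to be self-contained.

```python
import csv
import sqlite3

# Hypothetical file and column names -- adjust to the actual CSV exports.
# A tiny stand-in file is created here so the sketch runs on its own.
with open("cookies.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["visit_id", "name", "value"])  # assumed header
    writer.writerow(["1", "session", "abc123"])

conn = sqlite3.connect(":memory:")
with open("cookies.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    # Build the table schema and insert statement from the CSV header.
    cols = ", ".join(header)
    placeholders = ", ".join("?" for _ in header)
    conn.execute(f"CREATE TABLE cookies ({cols})")
    conn.executemany(f"INSERT INTO cookies VALUES ({placeholders})", reader)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM cookies").fetchone()[0]
print(count)
```

For the multi-hundred-gigabyte tables (Requests, Responses), a bulk-loading path such as a database's native CSV import command will be considerably faster than row-by-row inserts.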
Note: Although our paper does not analyze the collected cookie and LocalStorage objects, we publish them to enable further studies.
You can find further information about our study in our GitHub repository.