Should I Run My Cloud Benchmark on Black Friday?

📅 2025-10-14
🤖 AI Summary
Cloud benchmarking suffers from poor reproducibility and credibility due to high performance variability. This paper presents the first empirical study of how globally significant events, such as Black Friday, affect the performance of cloud-based stream processing applications. We conduct long-term, repeated benchmark executions and collect multi-dimensional performance data across daily, weekly, and event-aligned cycles, followed by rigorous statistical analysis. Our results reveal observable, application-level periodic performance fluctuations in cloud environments; however, their magnitude is limited. Notably, no significant performance degradation occurs during Black Friday; instead, diurnal variations stem primarily from regular tidal load patterns rather than exceptional events. The study uncovers previously overlooked fine-grained periodic performance patterns and, critically, empirically strengthens confidence in cloud benchmarking outcomes. These findings provide foundational insights for designing robust experimental methodologies and evaluation frameworks for cloud systems.

📝 Abstract
Benchmarks and performance experiments are frequently conducted in cloud environments. However, their results are often treated with caution, as the presumed high variability of performance in the cloud raises concerns about reproducibility and credibility. In a recent study, we empirically quantified the impact of this variability on benchmarking results by repeatedly executing a stream processing application benchmark at different times of the day over several months. Our analysis confirms that performance variability is indeed observable at the application level, although it is less pronounced than often assumed. The larger scale of our study compared to related work allowed us to identify subtle daily and weekly performance patterns. We now extend this investigation by examining whether a major global event, such as Black Friday, affects the outcomes of performance benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Quantifying cloud performance variability impact on benchmarks
Identifying daily and weekly cloud performance patterns
Assessing global events' effects on cloud benchmark results
Innovation

Methods, ideas, or system contributions that make the work stand out.

Repeatedly executing a benchmark at different times of day over several months
Identifying subtle daily and weekly performance patterns
Examining the impact of global events on benchmark results
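The methodology described above (repeated benchmark executions spread across the day over many weeks, then a statistical comparison of time-aligned groups) can be sketched roughly as follows. All data, magnitudes, and the random seed here are hypothetical and purely illustrative; the paper's actual benchmark, metrics, and statistical tests are not reproduced.

```python
from collections import defaultdict
from datetime import datetime, timedelta
from statistics import mean
import math
import random

# Hypothetical repeated benchmark executions: one throughput sample
# (records/s) every 4 hours over 8 weeks, with a small synthetic
# daytime dip standing in for a diurnal "tidal" load pattern.
random.seed(42)
start = datetime(2025, 10, 1)
samples = []
for i in range(8 * 7 * 6):  # 8 weeks x 7 days x 6 runs/day
    t = start + timedelta(hours=4 * i)
    diurnal = -300.0 * math.sin(math.pi * t.hour / 24)  # dip peaks at noon
    samples.append((t, 10_000.0 + diurnal + random.gauss(0, 50)))

# Group samples by hour-of-day; a pronounced spread between the
# per-group means indicates a periodic (diurnal) pattern.
by_hour = defaultdict(list)
for t, v in samples:
    by_hour[t.hour].append(v)

hour_means = {h: mean(vs) for h, vs in sorted(by_hour.items())}
overall = mean(v for _, v in samples)
rel_amplitude = (max(hour_means.values()) - min(hour_means.values())) / overall
print(f"relative diurnal amplitude: {rel_amplitude:.1%}")
```

The same grouping applied to day-of-week (or to an event-aligned window around Black Friday) would surface weekly or event-driven patterns; a small relative amplitude, as the paper reports, supports confidence in benchmark results taken at arbitrary times.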
Sören Henning
Dynatrace Research, Linz, Austria
Adriano Vogel
Dynatrace Research, Linz, Austria
Esteban Perez-Wohlfeil
Dynatrace Research, Linz, Austria
Otmar Ertl
Dynatrace Research, Linz, Austria
Rick Rabiser
Professor at LIT CPS Lab, Johannes Kepler University Linz
Software Engineering