🤖 AI Summary
Traditional approaches to retrospective analysis of high-dimensional operational time-series data suffer from either high cost or inaccurate replay. Addressing the need for alternative history analysis, this work systematically identifies and leverages three key properties (decomposability of statistics, sparsity in attribute-value combinations, and efficiency of aggregation operations) to design a specialized storage and computation architecture. By exploiting these characteristics, the proposed system achieves 100% analytical accuracy while substantially reducing total cost of ownership. Empirical evaluations across multiple real-world datasets and production pipelines demonstrate up to an 85-fold cost reduction compared to conventional solutions, highlighting the efficacy and practicality of the approach in large-scale operational environments.
📝 Abstract
Many operational systems collect high-dimensional time-series data about users and systems on key performance metrics. For instance, ISPs, content distribution networks, and video delivery services collect quality-of-experience metrics for user sessions, associated with metadata (e.g., location, device, ISP). Over such historical data, operators and data analysts often need to run retrospective analyses; e.g., analyzing anomaly detection algorithms, experimenting with different alert configurations, evaluating new algorithms, and so on. We refer to this class of workloads as alternative history analysis for operational datasets. We show that in such settings, traditional data processing solutions (e.g., data warehouses, sampling, sketching, big-data systems) either incur high operational costs or do not guarantee accurate replay. We design and implement a system, called AHA (Alternative History Analytics), that overcomes both challenges to provide cost efficiency and fidelity for high-dimensional data. The design of AHA is based on analytical and empirical insights about such workloads: 1) the decomposability of the underlying statistics; 2) sparsity in the number of active subpopulations over attribute-value combinations; and 3) the efficiency of aggregation operations in modern analytics databases. Using multiple real-world datasets as well as case studies on production pipelines at a large video analytics company, we show that AHA provides 100% accuracy for a broad range of downstream tasks and up to 85x lower total cost of ownership (i.e., compute + storage) compared to conventional methods.
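To make the first insight concrete, here is a minimal, hypothetical sketch (not taken from the paper) of why decomposability matters: a statistic such as the mean can be rebuilt exactly from small per-chunk partial aggregates (sum, count), so a system can store compact partials per subpopulation instead of raw sessions and still replay mean-based analyses with 100% accuracy. The function names below are illustrative assumptions, not AHA's API.

```python
# Illustrative sketch of a decomposable statistic: the mean is recoverable
# exactly from (sum, count) partials, and merging partials is associative,
# so partials can be computed per time chunk and combined at replay time.

def partial_aggregate(values):
    """Reduce a chunk of raw measurements to a (sum, count) partial."""
    return (sum(values), len(values))

def merge(p1, p2):
    """Combine two partial aggregates; order and grouping do not matter."""
    return (p1[0] + p2[0], p1[1] + p2[1])

def mean(partial):
    """Finalize a partial aggregate into the exact mean."""
    s, n = partial
    return s / n

# Raw session metrics arriving in two batches for one subpopulation.
batch1 = [4.0, 6.0]
batch2 = [5.0, 9.0]

merged = merge(partial_aggregate(batch1), partial_aggregate(batch2))
# Exactly matches the mean over the raw, concatenated data.
assert mean(merged) == mean(partial_aggregate(batch1 + batch2))
```

Non-decomposable statistics (e.g., an exact median) lack such a small merge-able summary, which is why the class of supported statistics is a design consideration for this kind of system.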