Unveiling Statistical Significance of Online Regression over Multiple Datasets

📅 2025-12-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing comparative evaluations of online regression models across multiple datasets lack rigorous statistical significance testing, particularly under dynamic environments with concept drift. Method: This paper introduces the first principled hypothesis-testing framework for online regression in dynamic settings. It systematically adapts the Friedman test and Nemenyi post-hoc test to multi-dataset online regression evaluation, integrating real and synthetic data, 5-fold cross-validation, and averaging over multiple random seeds to robustly assess convergence stability and adaptation capability under concept drift. Results: Empirical analysis reveals statistically significant performance inconsistencies among mainstream algorithms—including AROW and OGD—highlighting their limited convergence robustness and inadequate responsiveness to concept drift. The work delivers a fully reproducible benchmark framework and empirically grounded insights, establishing a statistical foundation for reliability validation and improvement of online learning algorithms.

📝 Abstract
Despite extensive focus on techniques for evaluating the performance of two learning algorithms on a single dataset, the critical challenge of developing statistical tests to compare multiple algorithms across various datasets has been largely overlooked in most machine learning research. Additionally, in the realm of Online Learning, ensuring statistical significance is essential to validate continuous learning processes, particularly for achieving rapid convergence and effectively managing concept drift in a timely manner. Robust statistical methods are needed to assess the significance of performance differences as data evolves over time. This article examines state-of-the-art online regression models and empirically evaluates several suitable tests. To compare multiple online regression models across various datasets, we employed the Friedman test along with corresponding post-hoc tests. For thorough evaluation, we use both real and synthetic datasets with 5-fold cross-validation and seed averaging, ensuring comprehensive assessment across various data subsets. Our tests generally confirmed that the performance of competitive baselines is consistent with their individual reports. However, some statistical test results also indicate that there is still room for improvement in certain aspects of state-of-the-art methods.
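The evaluation protocol described in the abstract (5-fold cross-validation averaged over several random seeds, yielding one score per dataset per algorithm) can be sketched as below. This is a minimal illustration only: the single-pass OGD regressor, learning rate, seeds, and synthetic data are placeholder assumptions, not the paper's implementation.

```python
import numpy as np

def ogd_mse(X_train, y_train, X_test, y_test, lr=0.01):
    """Single-pass online gradient descent on squared loss -- a toy
    stand-in for the online regressors the paper compares."""
    w = np.zeros(X_train.shape[1])
    for x, y in zip(X_train, y_train):
        w -= lr * (w @ x - y) * x          # per-sample gradient step
    return float(np.mean((X_test @ w - y_test) ** 2))

def seed_averaged_cv(X, y, n_folds=5, seeds=(0, 1, 2)):
    """5-fold CV repeated over shuffling seeds; the mean MSE is the
    per-dataset score later fed into the Friedman test."""
    scores = []
    for seed in seeds:
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(X))
        folds = np.array_split(idx, n_folds)
        for i in range(n_folds):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
            scores.append(ogd_mse(X[train], y[train], X[test], y[test]))
    return float(np.mean(scores))

# Synthetic regression data (illustrative only, not the paper's datasets)
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
print(f"seed-averaged CV MSE: {seed_averaged_cv(X, y):.4f}")
```

Running this once per dataset and per algorithm produces the score matrix that the multi-dataset significance tests operate on.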
Problem

Research questions and friction points this paper is trying to address.

Develop statistical tests for comparing multiple online regression algorithms across various datasets.
Ensure statistical significance in online learning for rapid convergence and concept drift management.
Evaluate state-of-the-art online regression models using Friedman and post-hoc tests on diverse datasets.
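The first step of the comparison described above — testing whether any of the algorithms differ significantly across datasets — can be sketched with SciPy's Friedman test. The error matrix below is hypothetical, chosen only to illustrate the mechanics; it is not the paper's results.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical per-dataset mean errors (rows: datasets, cols: algorithms).
# Column order: AROW, OGD, a generic baseline -- values are illustrative.
errors = np.array([
    [0.21, 0.25, 0.30],
    [0.18, 0.22, 0.27],
    [0.35, 0.33, 0.40],
    [0.12, 0.16, 0.19],
    [0.28, 0.27, 0.34],
])

# Friedman test: ranks the algorithms within each dataset and asks
# whether the average ranks could plausibly be equal.
stat, p_value = friedmanchisquare(*errors.T)
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.4f}")
```

A small p-value rejects the null hypothesis that all algorithms perform equivalently, which then licenses a post-hoc test to locate the pairwise differences.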
Innovation

Methods, ideas, or system contributions that make the work stand out.

Friedman test with post-hoc for multi-dataset comparison
Real and synthetic datasets with cross-validation and averaging
Statistical significance validation in online regression models
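When the Friedman test rejects the null hypothesis, the Nemenyi post-hoc test compares average ranks against a critical difference, CD = q_α · sqrt(k(k+1)/(6N)), where k is the number of algorithms and N the number of datasets. A minimal sketch follows; the q_α values are the standard table entries for α = 0.05 (Demšar, 2006), while the average ranks are hypothetical.

```python
import math

def nemenyi_cd(k: int, n_datasets: int, q_alpha: float) -> float:
    """Critical difference for the Nemenyi test: two algorithms differ
    significantly if their average ranks differ by more than CD."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n_datasets))

# q_alpha at alpha = 0.05 (Studentized range statistic / sqrt(2))
Q_ALPHA_05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728}

k, n = 3, 5                        # illustrative: 3 algorithms, 5 datasets
cd = nemenyi_cd(k, n, Q_ALPHA_05[k])
avg_ranks = {"AROW": 1.4, "OGD": 1.6, "baseline": 3.0}  # hypothetical
print(f"CD = {cd:.3f}")
for a in avg_ranks:
    for b in avg_ranks:
        if a < b:
            gap = abs(avg_ranks[a] - avg_ranks[b])
            verdict = "significant" if gap > cd else "not significant"
            print(f"{a} vs {b}: rank gap {gap:.2f} -> {verdict}")
```

Note that the CD shrinks with more datasets: with few datasets, only large rank gaps register as significant, which is why multi-dataset benchmarks matter for the reliability claims the paper makes.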
Mohammad Abu-Shaira
Computer Science and Engineering, The University of North Texas, Denton, United States
Weishi Shi
University of North Texas
Data mining · Machine learning · Active learning