Hedging Is Not All You Need: A Simple Baseline for Online Learning Under Haphazard Inputs

📅 2024-09-16
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of haphazard streaming data from edge devices (e.g., missing values, faulty readings, and newly appearing inputs), this paper proposes HapNet, a simple, scalable online model that does not require online backpropagation. Methodologically, the key observation is that hedging reduces to a special case of a weighted residual connection, which motivates approximating it with plain self-attention; this removes the specialized components (auxiliary dropouts, forked architectures, intricate network designs) that prior hedging-based methods depend on. The paper also introduces the harder setting of scaling with a variable window, where inputs become positionally uncorrelated and existing fixed-window methods break down, and shows that a variant of HapNet handles it. By avoiding online backpropagation, HapNet keeps computational overhead low on resource-constrained edge devices. Evaluated on five benchmark datasets, HapNet achieves competitive performance, with notable robustness in variable-window scenarios compared with existing hedging-based approaches.
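The summary's central idea, hedging as a weighted residual connection that can be approximated with self-attention, can be sketched in a few lines. The snippet below is an illustrative reading of that claim, not the paper's actual implementation: `hedged_output` shows the weighted-residual form (each layer's prediction combined with an adaptive weight), and `attention_weights` shows one plausible way to derive those weights from attention scores instead of maintaining explicit hedging weights. The choice of the deepest layer's prediction as the attention query is an assumption made for this sketch.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hedged_output(layer_preds, alphas):
    # Hedging as a weighted residual connection:
    # y = sum_l alpha_l * f_l(x), with per-layer weights alpha_l.
    return sum(a * p for a, p in zip(alphas, layer_preds))

def attention_weights(layer_preds):
    # Illustrative self-attention approximation of the hedging weights:
    # score each layer's prediction against a query (here, the deepest
    # layer's prediction) and normalize with softmax.
    P = np.stack(layer_preds)              # (L, d)
    q = P[-1]                              # query vector, an assumption
    scores = P @ q / np.sqrt(P.shape[1])   # scaled dot-product scores
    return softmax(scores)                 # weights sum to 1
```

Under this reading, the attention weights play the role of the hedging distribution over layers, so no separate hedging update rule (and no online backpropagation through it) is needed.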

📝 Abstract
Handling haphazard streaming data, such as data from edge devices, is a challenging problem. Over time, the incoming data becomes inconsistent, with inputs going missing, turning faulty, or newly appearing; reliable models are therefore required. Recent methods for this problem rely on hedging-based solutions and require specialized elements such as auxiliary dropouts, forked architectures, and intricate network designs. We observe that hedging can be reduced to a special case of a weighted residual connection, which motivated us to approximate it with plain self-attention. In this work, we propose HapNet, a simple baseline that is scalable, does not require online backpropagation, and is adaptable to varying input types. All present methods are restricted to scaling with a fixed window; we introduce the more complex problem of scaling with a variable window, where the data becomes positionally uncorrelated and cannot be addressed by present methods. We demonstrate that a variant of the proposed approach works even in this complex scenario. We extensively evaluate the proposed approach on five benchmarks and find competitive performance.
Problem

Research questions and friction points this paper is trying to address.

Online Learning
Data Stream Management
Adaptive Modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

HapNet
Self-attention Mechanism
Online Learning