Online Linear Regression with Paid Stochastic Features

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies online linear regression under noisy observations, where the learner can pay to reduce observation noise and must balance prediction error against payment cost to minimize cumulative regret under a joint loss. We propose the first “payment-adjustable noise” online learning framework and rigorously characterize the optimal regret rates: $O(\sqrt{T})$ when the noise covariance is known, and $O(T^{2/3})$ when it is unknown. Technically, we derive uniform convergence bounds for the empirical loss over both the payment policy space and the linear predictor space, leveraging matrix martingale concentration inequalities. Our key contributions are: (i) establishing the fundamental optimal regret order for online linear regression with controllable noise; (ii) revealing how the information structure—specifically, whether the noise covariance is known—affects inherent learning difficulty; and (iii) providing principled algorithmic design guidelines that achieve these optimal rates.

📝 Abstract
We study an online linear regression setting in which the observed feature vectors are corrupted by noise and the learner can pay to reduce the noise level. In practice, this may happen for several reasons: for example, because features can be measured more accurately using more expensive equipment, or because data providers can be incentivized to release less private features. Assuming feature vectors are drawn i.i.d. from a fixed but unknown distribution, we measure the learner's regret against the linear predictor minimizing a notion of loss that combines the prediction error and payment. When the mapping between payments and noise covariance is known, we prove that the rate $\sqrt{T}$ is optimal for regret if logarithmic factors are ignored. When the noise covariance is unknown, we show that the optimal regret rate becomes of order $T^{2/3}$ (ignoring log factors). Our analysis leverages matrix martingale concentration, showing that the empirical loss uniformly converges to the expected one for all payments and linear predictors.
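The setting described in the abstract can be illustrated with a minimal simulation. This is a sketch under stated assumptions, not the paper's algorithm: the payment-to-noise mapping `noise_cov` (noise covariance shrinking as `1/(1+payment)`), the fixed payment level, and the ridge-style predictor update are all hypothetical choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 1000
theta_star = rng.normal(size=d)  # unknown true linear predictor

def noise_cov(payment, base=1.0):
    # Hypothetical payment-to-noise mapping: paying more shrinks
    # the observation-noise covariance (an illustrative assumption).
    return (base / (1.0 + payment)) * np.eye(d)

payment = 1.0            # fixed payment level, chosen for illustration
theta_hat = np.zeros(d)
A = np.eye(d)            # regularized Gram matrix for ridge-style updates
b = np.zeros(d)
total_loss = 0.0

for t in range(T):
    x = rng.normal(size=d)                               # clean i.i.d. feature vector
    z = x + rng.multivariate_normal(np.zeros(d), noise_cov(payment))  # noisy observation
    y = x @ theta_star + 0.1 * rng.normal()              # label from clean features
    pred = z @ theta_hat
    total_loss += (pred - y) ** 2 + payment              # joint loss: error + payment
    # Naive ridge update on noisy features (biased in general; sketch only).
    A += np.outer(z, z)
    b += y * z
    theta_hat = np.linalg.solve(A, b)

avg_loss = total_loss / T
print(avg_loss)
```

Because each round's joint loss adds the payment on top of a nonnegative squared error, the average joint loss here is always at least the per-round payment; the learner's real problem, which the paper analyzes, is choosing payments to trade that cost against reduced feature noise.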
Problem

Research questions and friction points this paper is trying to address.

Online linear regression with feature noise that can be reduced through payments
Learner pays to decrease noise in observed feature vectors for better predictions
Analyzes regret against optimal predictor considering both prediction error and payment costs
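One natural way to formalize the regret notion sketched above, using our own notation rather than the paper's (the exact comparator class and loss aggregation are assumptions):

$$
R_T \;=\; \sum_{t=1}^{T} \mathbb{E}\!\left[\big(\langle \theta_t, x_t\rangle - y_t\big)^2 + c_t\right]
\;-\; T \cdot \min_{\theta,\, c}\; \mathbb{E}\!\left[\big(\langle \theta, x\rangle - y\big)^2 + c\right],
$$

where $\theta_t$ and $c_t$ are the learner's predictor and payment at round $t$, and the benchmark is the best fixed predictor-payment pair under the joint loss combining prediction error and payment cost.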
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online linear regression with paid feature noise reduction
Regret analysis with known and unknown noise covariance
Matrix martingale concentration for empirical loss convergence