🤖 AI Summary
This paper addresses the computational burden of hyperparameter optimization in regularized estimation for system identification. To this end, we propose two novel hyperparameter-free estimators: a generalized Bayes estimator and a closed-form biased estimator. Theoretically, we construct, for the first time, a family of hyperparameter-free estimators whose excess mean squared error (MSE) exactly matches that of the optimal empirical Bayes regularized estimator. Methodologically, leveraging large-sample asymptotics, ridge regression modeling, and bias–variance trade-off analysis, we derive analytical closed-form solutions, bypassing iterative optimization entirely. Numerical experiments demonstrate that the proposed estimators achieve MSE performance comparable to the best empirical Bayes method while significantly reducing computation time, making them particularly suitable for real-time applications and resource-constrained environments.
📝 Abstract
Regularized system identification has become a significant complement to more classical system identification. It has been numerically shown that kernel-based regularized estimators often perform better than the maximum likelihood estimator in terms of minimizing mean squared error (MSE). However, regularized estimators often require hyper-parameter estimation. This paper focuses on ridge regression and the regularized estimator obtained by employing the empirical Bayes hyper-parameter estimator. We utilize the excess MSE to quantify the MSE difference between the empirical-Bayes-based regularized estimator and the maximum likelihood estimator for large sample sizes. We then exploit the excess MSE expressions to develop both a family of generalized Bayes estimators and a family of closed-form biased estimators. They have the same excess MSE as the empirical-Bayes-based regularized estimator but eliminate the need for hyper-parameter estimation. Moreover, we conduct numerical simulations to show that the performance of these new estimators is comparable to that of the empirical-Bayes-based regularized estimator, while computationally they are more efficient.
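The abstract does not spell out the estimators themselves, but the trade-off it describes can be sketched with a toy linear regression: the ridge regularization weight is chosen either by an empirical-Bayes-style grid search that maximizes the marginal likelihood (iterative, costly), or by a closed-form plug-in rule computed once from the least-squares estimate. The plug-in rule below, λ = dσ²/‖θ̂_ML‖², is a classical illustrative stand-in, not the paper's actual estimator, and all model details (design matrix, noise level) are assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression stand-in for a system identification problem:
# y = X @ theta + noise, with known noise variance sigma2.
n, d = 200, 10
X = rng.standard_normal((n, d))
theta_true = rng.standard_normal(d)
sigma2 = 0.5
y = X @ theta_true + np.sqrt(sigma2) * rng.standard_normal(n)

def ridge(X, y, lam):
    """Ridge (regularized least squares) estimate for weight lam."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# (a) Empirical-Bayes-style choice: under the prior theta ~ N(0, (sigma2/lam) I),
# the marginal distribution of y is N(0, (sigma2/lam) X X^T + sigma2 I).
# Minimize the negative log marginal likelihood over a grid of lam values.
def neg_log_marglik(lam):
    S = (sigma2 / lam) * (X @ X.T) + sigma2 * np.eye(n)
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + y @ np.linalg.solve(S, y))

grid = np.logspace(-3, 3, 61)
lam_eb = grid[np.argmin([neg_log_marglik(l) for l in grid])]
theta_eb = ridge(X, y, lam_eb)

# (b) Closed-form plug-in choice (illustrative only): lam = d*sigma2/||theta_ml||^2,
# computed directly from the maximum likelihood (least squares) estimate,
# with no iterative hyper-parameter search.
theta_ml = np.linalg.lstsq(X, y, rcond=None)[0]
lam_cf = d * sigma2 / (theta_ml @ theta_ml)
theta_cf = ridge(X, y, lam_cf)

def mse(t):
    return float(np.mean((t - theta_true) ** 2))

print(f"MSE  ML: {mse(theta_ml):.4f}  EB: {mse(theta_eb):.4f}  "
      f"closed-form: {mse(theta_cf):.4f}")
```

The point of the contrast is cost, not accuracy: step (a) solves an n-by-n linear system per grid point (or per iteration of an optimizer), whereas step (b) costs one least-squares solve plus one ridge solve, mirroring the paper's claim that closed-form hyper-parameter-free estimators avoid iterative optimization at comparable MSE.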