Generalizing Linear Autoencoder Recommenders with Decoupled Expected Quadratic Loss

📅 2026-03-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing linear autoencoder recommendation models such as EDLAE, which admit closed-form solutions only under the hyperparameter setting b = 0, restricting their representational capacity. The authors propose the Decoupled Expected Quadratic Loss (DEQL), a generalization of the EDLAE objective to the broader case b > 0, and derive the first closed-form solution in this setting, substantially expanding the solution space. To keep the b > 0 solution computationally tractable, they design an efficient algorithm based on Miller's matrix inversion theorem. Experiments on standard recommendation benchmarks show that DEQL with b > 0 outperforms the original b = 0 EDLAE baseline, confirming the effectiveness of the proposed approach.
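Miller's theorem expresses the inverse of a sum of matrices through a sequence of rank-one corrections; its rank-one case coincides with the Sherman-Morrison identity. A minimal illustrative sketch of that rank-one building block (not the paper's actual algorithm; the function name and signature are our own):

```python
import numpy as np

def rank_one_inverse_update(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, via the Sherman-Morrison
    identity -- the rank-one special case underlying Miller's theorem
    on inverting a sum of matrices without a fresh full inversion."""
    Au = A_inv @ u          # A^{-1} u
    vA = v @ A_inv          # v^T A^{-1}
    denom = 1.0 + v @ Au    # scalar correction factor
    return A_inv - np.outer(Au, vA) / denom
```

Chaining such updates lets one amortize the cost of inverting a perturbed Gram matrix, which is the kind of trick needed to make the b > 0 case tractable.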

📝 Abstract
Linear autoencoders (LAEs) have gained increasing popularity in recommender systems due to their simplicity and strong empirical performance. Most LAE models, including the Emphasized Denoising Linear Autoencoder (EDLAE) introduced by Steck (2020), use a quadratic loss during training. However, the original EDLAE provides closed-form solutions only for the hyperparameter choice $b = 0$, which limits its capacity. In this work, we generalize the EDLAE objective into a Decoupled Expected Quadratic Loss (DEQL). We show that DEQL simplifies the derivation of EDLAE solutions and reveals solutions in the broader hyperparameter range $b > 0$, which were not derived in Steck's original paper. Additionally, we propose an efficient algorithm based on Miller's matrix inversion theorem to ensure computational tractability for the $b > 0$ case. Empirical results on benchmark datasets show that the $b > 0$ solutions provided by DEQL outperform the $b = 0$ EDLAE baseline, demonstrating that DEQL expands the solution space and enables the discovery of models with better testing performance.
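For context, the $b = 0$ baseline this paper generalizes has a well-known closed form in Steck's EASE/EDLAE line of work: a ridge-regularized item-item weight matrix with a zero-diagonal constraint, where dropout denoising adds a $\frac{p}{1-p}\,\mathrm{diag}(X^\top X)$ term to the regularizer. A minimal sketch of that baseline (assumed from Steck (2020), not this paper's new $b > 0$ solution; `p` is the dropout probability, `lam` the L2 weight):

```python
import numpy as np

def edlae_closed_form(X, p=0.5, lam=1.0):
    """Closed-form item-item weights for the b = 0 denoising LAE.
    X: user-item interaction matrix (users x items)."""
    G = X.T @ X
    # Dropout-induced regularization p/(1-p) * diag(G), plus an L2 term.
    reg = p / (1.0 - p) * np.diag(G) + lam
    P = np.linalg.inv(G + np.diag(reg))
    # Enforce a zero diagonal (EASE-style Lagrangian solution):
    # B_ij = delta_ij - P_ij / P_jj.
    B = np.eye(G.shape[0]) - P / np.diag(P)
    np.fill_diagonal(B, 0.0)
    return B
```

Scores for a user are then `x_u @ B`; the zero diagonal prevents the model from trivially copying each item's own input.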
Problem

Research questions and friction points this paper is trying to address.

Linear Autoencoders
Recommender Systems
Quadratic Loss
Hyperparameter Limitation
Closed-form Solutions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupled Expected Quadratic Loss
Linear Autoencoder
Recommender Systems
Closed-form Solution
Matrix Inversion