Conditional Temporal Neural Processes with Covariance Loss

πŸ“… 2025-04-01
πŸ›οΈ International Conference on Machine Learning
πŸ“ˆ Citations: 15
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Neural processes often fail to accurately model dependencies between inputs and targets under noisy observations. To address this, we propose the Covariance Lossβ€”a novel objective that explicitly incorporates second-order statistical dependencies among target variables into the end-to-end training of conditional neural processes for the first time. By regularizing the covariance structure of the predictive distribution, our loss enhances the model’s ability to recover missing or degraded dependencies and improves robustness to observation noise. The method is architecture-agnostic and can be seamlessly integrated into mainstream neural process frameworks. Extensive experiments across multiple real-world time-series and regression benchmarks demonstrate consistent and significant improvements over state-of-the-art methods in three key aspects: predictive accuracy, fidelity of dependency structure recovery, and robustness to observational noise.

Technology Category

Application Category

πŸ“ Abstract
We introduce a novel loss function, Covariance Loss, which is conceptually equivalent to conditional neural processes and has a form of regularization so that is applicable to many kinds of neural networks. With the proposed loss, mappings from input variables to target variables are highly affected by dependencies of target variables as well as mean activation and mean dependencies of input and target variables. This nature enables the resulting neural networks to become more robust to noisy observations and recapture missing dependencies from prior information. In order to show the validity of the proposed loss, we conduct extensive sets of experiments on real-world datasets with state-of-the-art models and discuss the benefits and drawbacks of the proposed Covariance Loss.
Problem

Research questions and friction points this paper is trying to address.

Introduces Covariance Loss for neural networks
Enhances robustness to noisy observations
Recaptures missing dependencies from prior information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Covariance Loss for neural networks
Enhances robustness to noisy observations
Recaptures missing dependencies from prior information
πŸ”Ž Similar Papers
No similar papers found.