🤖 AI Summary
This work addresses a limitation of conventional time series forecasting models: they are trained with channel-agnostic loss functions, such as mean squared error, that often fail to capture channel-specific dynamics like abrupt changes or trend reversals. To overcome this, the authors propose the Channel-wise Perceptual Loss (CP Loss), which introduces learnable channel-wise filters to decompose each channel's signal into multi-scale components and thereby construct a dedicated perceptual space per channel, within which the loss is computed. The filters are optimized jointly, end to end, with the main forecasting model, substantially enhancing its capacity to model heterogeneous dynamics across channels. Experimental results demonstrate consistent improvements in overall prediction accuracy, with particularly notable gains on channels exhibiting sharp fluctuations or complex trend transitions.
📝 Abstract
Multi-channel time-series data, prevalent across diverse applications, exhibits significant heterogeneity across its channels. However, existing forecasting models are typically guided by channel-agnostic loss functions like MSE, which apply a uniform metric to all channels. This often leads them to fail to capture channel-specific dynamics such as sharp fluctuations or trend shifts. To address this, we propose a Channel-wise Perceptual Loss (CP Loss). Its core idea is to learn a unique perceptual space for each channel, adapted to that channel's characteristics, and to compute the loss within this space. Specifically, we first design a learnable channel-wise filter that decomposes the raw signal into disentangled multi-scale representations, which form the basis of our perceptual space. Crucially, the filter is optimized jointly with the main forecasting model, ensuring that the learned perceptual space is explicitly oriented towards the prediction task. Finally, losses are computed within these perceptual spaces to optimize the model. Code is available at https://github.com/zyh16143998882/CP_Loss.
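To make the idea concrete, here is a minimal NumPy sketch of the loss described above; it is not the authors' implementation (see the linked repository for that). It assumes a per-channel bank of filter kernels, one kernel per scale, plays the role of the learnable channel-wise filter; each channel of the prediction and target is convolved with its own kernels, and MSE is taken in that filtered "perceptual" space rather than on the raw signal.

```python
import numpy as np

def cp_loss(pred, target, filters):
    """Illustrative channel-wise perceptual loss (not the paper's code).

    pred, target : (batch, length, channels) forecasts and ground truth.
    filters      : (channels, scales, kernel) per-channel filter bank,
                   standing in for the paper's learnable channel-wise
                   filters (here just fixed arrays for illustration).
    """
    B, L, C = pred.shape
    Cf, S, K = filters.shape
    assert C == Cf, "one filter bank per channel"
    total = 0.0
    for c in range(C):            # each channel gets its own perceptual space
        for s in range(S):        # each scale is one filtered view of the signal
            k = filters[c, s]
            fp = np.stack([np.convolve(pred[b, :, c], k, mode="same")
                           for b in range(B)])
            ft = np.stack([np.convolve(target[b, :, c], k, mode="same")
                           for b in range(B)])
            # MSE in the filtered space, not on the raw signal
            total += np.mean((fp - ft) ** 2)
    return total / (C * S)
```

In the actual method the filter weights would be parameters updated by backpropagation together with the forecasting model; this sketch only shows the forward computation of the loss.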