Rethinking Time Encoding via Learnable Transformation Functions

📅 2025-05-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing time encoding methods—such as sinusoidal encoding—rely on fixed functional forms and strong inductive biases, limiting their capacity to model the diverse, nonlinear, and aperiodic temporal patterns prevalent in real-world data. To address this, we propose Learnable Transformation-based Generalized Time Encoding (LeTE), a framework that parameterizes the time transformation function and learns it end-to-end via deep neural networks. LeTE enables differentiable, nonlinear time mapping, unifying the representation of periodic, trended, and abrupt temporal dynamics under a single generalized paradigm. It is architecture-agnostic, plug-and-play compatible with mainstream sequence models, and subsumes existing encodings as special cases. Extensive experiments across multiple time series forecasting and classification benchmarks demonstrate consistent and significant performance gains. These results validate LeTE’s superior generalization, robustness to temporal heterogeneity, and broad cross-domain applicability.

📝 Abstract
Effectively modeling time information and incorporating it into applications or models involving chronologically occurring events is crucial. Real-world scenarios often involve diverse and complex time patterns, which pose significant challenges for time encoding methods. While previous methods focus on capturing time patterns, many rely on specific inductive biases, such as using trigonometric functions to model periodicity. This narrow focus on single-pattern modeling makes them less effective in handling the diversity and complexity of real-world time patterns. In this paper, we investigate how to improve existing, commonly used time encoding methods and introduce Learnable Transformation-based Generalized Time Encoding (LeTE). We propose using deep function learning techniques to parameterize non-linear transformations in time encoding, making them learnable and capable of modeling generalized time patterns, including diverse and complex temporal dynamics. By enabling learnable transformations, LeTE encompasses previous methods as specific cases and allows seamless integration into a wide range of tasks. Through extensive experiments across diverse domains, we demonstrate the versatility and effectiveness of LeTE.
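To make the contrast concrete, the sketch below illustrates the general idea rather than the authors' actual implementation: a fixed sinusoidal time encoding alongside a toy "learnable transformation" encoding, where each output coordinate applies a small parameterized nonlinearity to an affine map of the timestamp. All class and function names here are illustrative assumptions; in practice the parameters would be trained end-to-end with the downstream model.

```python
import math
import random

def sinusoidal_encoding(t, dim):
    """Fixed-form baseline: phi(t) = [sin(w_i t), cos(w_i t)].

    The transformation (sin/cos) and frequency schedule are hard-coded,
    which is the inductive bias the paper argues is too rigid.
    """
    enc = []
    for i in range(dim // 2):
        freq = 1.0 / (10000 ** (2 * i / dim))
        enc.append(math.sin(freq * t))
        enc.append(math.cos(freq * t))
    return enc

class LearnableTimeEncoding:
    """Toy sketch of a learnable time transformation (not the paper's code).

    Each coordinate computes phi_i(t) = g_i(w_i * t + b_i), where g_i is a
    small one-hidden-layer network whose parameters are learnable. If g_i
    were fixed to sin with suitable phase shifts, this would reduce to the
    sinusoidal special case, mirroring how LeTE subsumes fixed encodings.
    """
    def __init__(self, dim, hidden=8, seed=0):
        rng = random.Random(seed)
        self.w = [rng.gauss(0, 1) for _ in range(dim)]   # per-coordinate scale
        self.b = [rng.gauss(0, 1) for _ in range(dim)]   # per-coordinate shift
        # parameters of the learnable nonlinearity g_i
        self.u = [[rng.gauss(0, 1) for _ in range(hidden)] for _ in range(dim)]
        self.c = [[rng.gauss(0, 1) for _ in range(hidden)] for _ in range(dim)]
        self.v = [[rng.gauss(0, 1) for _ in range(hidden)] for _ in range(dim)]

    def encode(self, t):
        out = []
        for i in range(len(self.w)):
            z = self.w[i] * t + self.b[i]
            # g_i(z) = sum_j v_ij * tanh(u_ij * z + c_ij)
            out.append(sum(v * math.tanh(u * z + c)
                           for u, c, v in zip(self.u[i], self.c[i], self.v[i])))
        return out
```

Because the nonlinearity itself is parameterized rather than fixed, such an encoding can in principle fit periodic, trended, or abrupt patterns from data, which is the core claim the paper evaluates empirically.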
Problem

Research questions and friction points this paper is trying to address.

Improving time encoding for diverse real-world patterns
Overcoming limitations of fixed inductive biases
Generalizing time pattern modeling via learnable transformations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable Transformation-based Generalized Time Encoding
Deep function learning for non-linear transformations
Handling diverse and complex temporal dynamics
Xi Chen
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
Yateng Tang
Tencent Weixin Group, Shenzhen, China
Jiarong Xu
Assistant Professor, Fudan University (graph mining, data mining)
Jiawei Zhang
IFM Lab, Department of Computer Science, University of California, Davis, CA, USA
Siwei Zhang
ETH Zurich (3D human pose estimation, human-scene interactions)
Sijia Peng
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
Xuehao Zheng
Tencent Weixin Group, Shenzhen, China
Yun Xiong
Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China