🤖 AI Summary
Addressing the longstanding trade-off between accuracy and efficiency in 3D human motion prediction, this paper proposes LuKAN, a novel framework built upon the Kolmogorov–Arnold Network (KAN). LuKAN introduces Lucas polynomials as learnable activation functions for the first time, significantly enhancing the modeling of oscillatory joint dynamics. It pairs the discrete wavelet transform with its inverse to efficiently capture long-range temporal dependencies and reconstruct coherent poses in the time domain. Additionally, a lightweight spatial projection module enforces structural consistency, and a temporal dependency learning module models motion dynamics. Evaluated on the H3.6M, AMASS, and 3DPW benchmarks, LuKAN achieves state-of-the-art performance with substantially fewer parameters and lower FLOPs. Within the 1000 ms prediction horizon, it reduces average prediction error by 5.2%–8.7% over leading baselines, demonstrating superior accuracy, computational efficiency, and cross-dataset generalizability.
📝 Abstract
The goal of 3D human motion prediction is to forecast future 3D poses of the human body from historical motion data. Existing methods often struggle to balance prediction accuracy and computational efficiency. In this paper, we present LuKAN, an effective model based on Kolmogorov–Arnold Networks (KANs) with Lucas polynomial activations. Our model first applies the discrete wavelet transform to encode temporal information in the input motion sequence. Then, a spatial projection layer captures inter-joint dependencies, ensuring structural consistency of the human body. At the core of LuKAN is the Temporal Dependency Learner, which employs a KAN layer parameterized by Lucas polynomials for efficient function approximation. These polynomials provide computational efficiency and an enhanced capability to handle oscillatory behaviors. Finally, the inverse discrete wavelet transform reconstructs motion sequences in the time domain, generating temporally coherent predictions. Extensive experiments on three benchmark datasets demonstrate the competitive performance of our model compared to strong baselines, as evidenced by both quantitative and qualitative evaluations. Moreover, its compact architecture, coupled with the linear recurrence of Lucas polynomials, ensures computational efficiency.
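The "linear recurrence of Lucas polynomials" mentioned above is what makes the basis cheap to evaluate: each polynomial follows L_n(x) = x·L_{n-1}(x) + L_{n-2}(x) with L_0(x) = 2 and L_1(x) = x, so a degree-d basis costs O(d) operations per input. The following is an illustrative sketch of such a basis and a KAN-style activation built from it, not the paper's actual implementation; the function names and the fixed coefficient array stand in for learnable parameters.

```python
import numpy as np

def lucas_basis(x, degree):
    """Evaluate Lucas polynomials L_0..L_degree at x via the linear
    recurrence L_n(x) = x * L_{n-1}(x) + L_{n-2}(x), L_0 = 2, L_1 = x.
    Returns an array of shape (degree + 1,) + x.shape."""
    x = np.asarray(x, dtype=float)
    basis = np.empty((degree + 1,) + x.shape)
    basis[0] = 2.0
    if degree >= 1:
        basis[1] = x
    for n in range(2, degree + 1):
        # Each step reuses the two previous polynomials: O(degree) total.
        basis[n] = x * basis[n - 1] + basis[n - 2]
    return basis

def lucas_activation(x, coeffs):
    """A KAN-style activation as a weighted sum of Lucas polynomials:
    phi(x) = sum_n c_n * L_n(x). In a real KAN layer, `coeffs` would be
    learnable per-edge parameters; here it is a fixed array."""
    basis = lucas_basis(x, len(coeffs) - 1)
    return np.tensordot(coeffs, basis, axes=1)
```

At x = 1 the basis reduces to the Lucas numbers 2, 1, 3, 4, 7, ..., which is a quick sanity check on the recurrence.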