AI Summary
In online test-time adaptation (TTA) for 3D human pose estimation, self-supervised learning suffers from error accumulation and performance degradation due to reliance on imperfect predictions. To address this, we propose a long-horizon TTA framework grounded in motion discretization. Our approach comprises three key innovations: (1) mapping continuous motion sequences into a discrete latent space via unsupervised clustering to extract semantically stable anchor actions; (2) introducing an exponential moving average-driven soft reset mechanism that dynamically suppresses error propagation during model rollback; and (3) incorporating a self-replay strategy to enhance temporal consistency modeling for out-of-distribution video streams. Evaluated on extended test sequences, our method significantly outperforms existing TTA approaches, enabling robust and persistent modeling of subject-specific morphology and motion dynamics. It demonstrates both stability and consistent accuracy improvement over prolonged adaptation periods.
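The EMA-driven soft reset described above can be illustrated with a minimal sketch. The paper does not specify the exact update rule, so the function names (`ema_update`, `soft_reset`), the decay and strength values, and the toy drift model below are all illustrative assumptions: parameters are kept as a flat dict of floats, the EMA tracks the online model, and a soft reset interpolates the drifted parameters back toward the EMA instead of hard-resetting to the source model.

```python
# Illustrative sketch only: names and hyperparameters are assumptions,
# not the paper's actual implementation.

def ema_update(ema_params, params, decay=0.999):
    """Update the exponential moving average of the model parameters."""
    return {k: decay * ema_params[k] + (1.0 - decay) * params[k]
            for k in params}

def soft_reset(params, ema_params, strength=0.5):
    """Pull adapted parameters partway back toward their EMA, damping
    accumulated error without discarding all adaptation progress."""
    return {k: (1.0 - strength) * params[k] + strength * ema_params[k]
            for k in params}

# Toy rollout: the online model drifts upward while the EMA lags behind.
params = {"w": 1.0}
ema = dict(params)
for step in range(100):
    params["w"] += 0.01          # stands in for a noisy adaptation step
    ema = ema_update(ema, params)
params = soft_reset(params, ema, strength=0.5)
```

The key design point is that the reset is "soft": a hard reset to the source weights would discard all subject-specific adaptation, whereas interpolating toward the slowly-moving EMA keeps most of the useful adaptation while suppressing recent drift.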
Abstract
Online test-time adaptation addresses the train-test domain gap by adapting the model on unlabeled streaming test inputs before making the final prediction. However, online adaptation for 3D human pose estimation suffers from error accumulation when relying on self-supervision with imperfect predictions, leading to degraded performance over time. To mitigate this fundamental challenge, we propose a novel solution that highlights the use of motion discretization. Specifically, we employ unsupervised clustering in the latent motion representation space to derive a set of anchor motions, whose regularity aids in supervising the human pose estimator and enables efficient self-replay. Additionally, we introduce an effective and efficient soft-reset mechanism by reverting the pose estimator to its exponential moving average during continuous adaptation. We examine long-term online adaptation by continuously adapting to out-of-domain streaming test videos of the same individual, which allows for the capture of consistent personal shape and motion traits throughout the streaming observation. By mitigating error accumulation, our solution enables robust exploitation of these personal traits for enhanced accuracy. Experiments demonstrate that our solution outperforms previous online test-time adaptation methods and validate our design choices.