Skeleton-to-Image Encoding: Enabling Skeleton Representation Learning via Vision-Pretrained Models

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses three obstacles in 3D human skeleton sequence analysis: heterogeneous skeleton formats, data scarcity, and the incompatibility of skeleton data with vision-pretrained models. To overcome these limitations, the authors propose a Skeleton-to-Image (S2I) encoding that partitions and reorders joints according to body-part semantics, transforming arbitrary-format skeleton sequences into standardized images. This formulation enables, for the first time, the direct use of off-the-shelf vision-pretrained models for self-supervised skeleton representation learning. By decoupling from rigid assumptions about specific skeleton structures, S2I effectively transfers knowledge from the visual domain and achieves state-of-the-art performance on the NTU-60, NTU-120, and PKU-MMD benchmarks, with particularly large gains in cross-format action recognition.
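The core encoding step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `BODY_PART_ORDER` permutation below assumes a 25-joint NTU-style layout, and the exact partition, normalization, and resizing the authors use are assumptions here.

```python
import numpy as np

# Hypothetical body-part grouping for a 25-joint skeleton (NTU RGB+D-style
# indices); the paper's actual partition and ordering may differ.
BODY_PART_ORDER = [
    3, 2, 20, 1, 0,            # head + torso (spine)
    4, 5, 6, 7, 21, 22,        # left arm + hand
    8, 9, 10, 11, 23, 24,      # right arm + hand
    12, 13, 14, 15,            # left leg
    16, 17, 18, 19,            # right leg
]

def skeleton_to_image(seq, out_size=(224, 224)):
    """Encode a skeleton sequence (T frames, J joints, 3 coords) as an image.

    Joints are reordered so that semantically related joints occupy adjacent
    image columns, the (T, J, 3) array is read as an H x W x 3 image with
    xyz coordinates as channels, and the result is resized with
    nearest-neighbour sampling to a standard resolution.
    """
    seq = np.asarray(seq, dtype=np.float32)      # (T, J, 3)
    seq = seq[:, BODY_PART_ORDER, :]             # semantic joint reordering
    # Min-max normalise coordinates into [0, 255], like pixel intensities.
    lo, hi = seq.min(), seq.max()
    img = (seq - lo) / (hi - lo + 1e-8) * 255.0
    # Nearest-neighbour resize from (T, J) to out_size via index sampling.
    H, W = out_size
    rows = (np.arange(H) * img.shape[0] // H).astype(int)
    cols = (np.arange(W) * img.shape[1] // W).astype(int)
    return img[rows][:, cols]                    # (H, W, 3)
```

Because the output is a fixed-size 3-channel image regardless of the input joint count or frame length, any off-the-shelf vision backbone can consume it without architectural changes, which is what makes the cross-format setting tractable.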

📝 Abstract
Recent advances in large-scale pretrained vision models have demonstrated impressive capabilities across a wide range of downstream tasks, including cross-modal and multi-modal scenarios. However, their direct application to 3D human skeleton data remains challenging due to fundamental differences in data format. Moreover, the scarcity of large-scale skeleton datasets and the need to incorporate skeleton data into multi-modal action recognition without introducing additional model branches present significant research opportunities. To address these challenges, we introduce Skeleton-to-Image Encoding (S2I), a novel representation that transforms skeleton sequences into image-like data by partitioning and arranging joints based on body-part semantics and resizing to standardized image dimensions. This encoding enables, for the first time, the use of powerful vision-pretrained models for self-supervised skeleton representation learning, effectively transferring rich visual-domain knowledge to skeleton analysis. While existing skeleton methods often design models tailored to specific, homogeneous skeleton formats, they overlook the structural heterogeneity that naturally arises from diverse data sources. In contrast, our S2I representation offers a unified image-like format that naturally accommodates heterogeneous skeleton data. Extensive experiments on NTU-60, NTU-120, and PKU-MMD demonstrate the effectiveness and generalizability of our method for self-supervised skeleton representation learning, including under challenging cross-format evaluation settings.
Problem

Research questions and friction points this paper is trying to address.

skeleton representation
vision-pretrained models
data heterogeneity
self-supervised learning
multimodal action recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Skeleton-to-Image Encoding
vision-pretrained models
self-supervised learning
heterogeneous skeleton data
multi-modal action recognition