SUGAR: Learning Skeleton Representation with Visual-Motion Knowledge for Action Recognition

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that large language models (LLMs) struggle to interpret skeletal data and generate semantically meaningful action descriptions. To this end, the authors propose SUGAR, a framework that, for the first time, introduces visual-motion priors into skeleton representation learning and achieves skeleton-to-language semantic alignment while keeping the LLM backbone frozen. A Temporal Query Projection (TQP) module efficiently models long-range skeletal dynamics, and the learned skeleton representations are discretized so the frozen LLM can consume them. On multiple skeleton-based action recognition benchmarks, SUGAR outperforms linear-probing baselines and generalizes well under zero-shot settings. Its core contribution is a new paradigm for skeleton understanding that jointly integrates vision, motion dynamics, and language, bridging the gap between low-level pose signals and high-level action semantics.

📝 Abstract
Large Language Models (LLMs) hold rich implicit knowledge and powerful transferability. In this paper, we explore combining LLMs with the human skeleton to perform action classification and description. However, when treating an LLM as a recognizer, two questions arise: 1) How can LLMs understand skeletons? 2) How can LLMs distinguish among actions? To address these problems, we introduce a novel paradigm named learning Skeleton representation with visUal-motion knowledGe for Action Recognition (SUGAR). In our pipeline, we first utilize off-the-shelf large-scale video models as a knowledge base to generate visual and motion information related to actions. Then, we propose to supervise skeleton learning through this prior knowledge to yield discrete representations. Finally, we use an LLM with untouched pre-training weights to understand these representations and generate the desired action targets and descriptions. Notably, we present a Temporal Query Projection (TQP) module to continuously model skeleton signals over long sequences. Experiments on several skeleton-based action classification benchmarks demonstrate the efficacy of SUGAR. Moreover, experiments in zero-shot scenarios show that SUGAR is more versatile than linear-based methods.
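The abstract says only that TQP "continuously model[s] skeleton signals over long sequences" without giving its design. A minimal sketch of one plausible reading, a fixed set of learnable temporal queries cross-attending over a long per-frame feature sequence to produce a fixed-length output; all shapes, names, and the single-head attention form are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def temporal_query_projection(skel_feats, queries, w_k, w_v):
    """Compress a variable-length skeleton sequence (T, D) into a
    fixed number of tokens (Q, D) via single-head cross-attention.
    All weights here are random stand-ins, not SUGAR's parameters."""
    keys = skel_feats @ w_k                                        # (T, D)
    values = skel_feats @ w_v                                      # (T, D)
    attn = softmax(queries @ keys.T / np.sqrt(queries.shape[-1]))  # (Q, T)
    return attn @ values                                           # (Q, D)

rng = np.random.default_rng(0)
T, D, Q = 300, 64, 8                    # long sequence -> 8 tokens
out = temporal_query_projection(
    rng.normal(size=(T, D)),            # per-frame skeleton features
    rng.normal(size=(Q, D)),            # learnable temporal queries
    rng.normal(size=(D, D)),
    rng.normal(size=(D, D)))
print(out.shape)                        # (8, 64) regardless of T
```

The point of the sketch is the shape contract: however long the input sequence, the output is always Q tokens, which is what lets downstream modules consume sequences of arbitrary length.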
Problem

Research questions and friction points this paper is trying to address.

Enabling LLMs to understand skeleton data for action recognition
Distinguishing between different human actions using skeleton representations
Learning skeleton representations with visual-motion knowledge transfer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using video models to generate visual-motion knowledge
Supervising skeleton learning with prior action knowledge
Employing LLMs with pre-trained weights for recognition
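The abstract's "discrete representations" suggests that continuous skeleton features are mapped to entries of a finite codebook, yielding token indices a frozen LLM can consume. A toy nearest-codebook quantizer in that spirit; the codebook size, feature dimension, and squared-Euclidean metric are assumptions for illustration, not details from the paper:

```python
import numpy as np

def quantize(features, codebook):
    """Map continuous feature vectors (N, D) to discrete codebook
    indices (N,), one plausible route from skeleton features to
    LLM-readable tokens. Codebook contents are illustrative."""
    # Squared Euclidean distance from every feature to every code.
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(1)
codebook = rng.normal(size=(16, 4))     # 16 codes, 4-dim features
# Features constructed near codes 3, 7, 7, 0 plus small noise.
feats = codebook[[3, 7, 7, 0]] + 0.01 * rng.normal(size=(4, 4))
tokens = quantize(feats, codebook)
print(tokens)                           # [3 7 7 0]
```

Each index can then be treated like a vocabulary item, which is what would let a pre-trained LLM process skeleton content without modifying its weights.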
👥 Authors
Qilang Ye
VCIP & TMCC & DISSec, College of Computer Science & College of Cryptology and Cyber Science, Nankai University
Yu Zhou
VCIP & TMCC & DISSec, College of Computer Science & College of Cryptology and Cyber Science, Nankai University
Lian He
Beijing Zhongguancun Academy
Jie Zhang
Great Bay University
Xuanming Guo
Beijing Zhongguancun Academy
Jiayu Zhang
Great Bay University
Mingkui Tan
South China University of Technology
Machine Learning, Large-scale Optimization
Weicheng Xie
Associate Professor, Shenzhen University
Facial expression analysis, Deep learning, Image processing
Yue Sun
Macao Polytechnic University
Tao Tan
FCA MPU
Medical Imaging AI
Xiaochen Yuan
Macao Polytechnic University
Ghada Khoriba
Nile University
Zitong Yu
U.S. Food and Drug Administration
Medical imaging, Deep learning, Machine learning, Image reconstruction