CauSkelNet: Causal Representation Learning for Human Behaviour Analysis

📅 2024-09-23
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Weak model interpretability and the absence of joint dynamics and causal mechanism modeling hinder motion recognition. To address this, we propose the first two-stage skeletal causal discovery framework integrating the PC algorithm with KL divergence. Our method automatically identifies and quantifies causal relationships among skeletal joints, generating interpretable, robust, and scale-invariant skeletal representations; it further incorporates graph convolutional networks for causal-aware action modeling. Evaluated on the EmoPain dataset, our model achieves significant improvements in accuracy, F1-score, and recall—particularly enhancing discriminative capability for protective behaviors. Ablation studies demonstrate strong robustness to variations in training data scale. This work establishes a novel paradigm for human behavior analysis that jointly ensures causal interpretability and biomechanical plausibility.
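
The two-stage pipeline described above (PC algorithm for structure discovery, KL divergence for edge weighting) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the causal-learn package, the per-joint feature representation, and the symmetrised KL weighting are choices made here for concreteness.

```python
# Hypothetical sketch: two-stage causal discovery over skeletal joints.
# Assumes the causal-learn package for the PC algorithm (the paper does not
# name a library) and histogram-based KL divergence for edge weights.
import numpy as np
from scipy.stats import entropy
from causallearn.search.ConstraintBased.PC import pc

def discover_joint_causality(joint_series, alpha=0.05, n_bins=32):
    """joint_series: (n_frames, n_joints) array of per-joint motion features
    (e.g. joint angles or speeds). Returns a weighted causal adjacency matrix."""
    n_joints = joint_series.shape[1]

    # Stage 1: the PC algorithm recovers which joints are causally connected.
    cg = pc(joint_series, alpha=alpha, indep_test="fisherz")
    skeleton = cg.G.graph  # nonzero entries mark discovered edges between joints

    # Stage 2: weight each discovered edge by a symmetrised KL divergence
    # between the two joints' empirical feature distributions.
    weights = np.zeros((n_joints, n_joints))
    for i in range(n_joints):
        for j in range(n_joints):
            if i != j and skeleton[i, j] != 0:
                lo = min(joint_series[:, i].min(), joint_series[:, j].min())
                hi = max(joint_series[:, i].max(), joint_series[:, j].max())
                p, _ = np.histogram(joint_series[:, i], bins=n_bins,
                                    range=(lo, hi), density=True)
                q, _ = np.histogram(joint_series[:, j], bins=n_bins,
                                    range=(lo, hi), density=True)
                p, q = p + 1e-8, q + 1e-8  # smooth to avoid log(0) in KL
                weights[i, j] = 0.5 * (entropy(p, q) + entropy(q, p))
    return weights
```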

📝 Abstract
Constrained by the limited interpretability of traditional machine learning methods for movement recognition and their shallow understanding of human movement, this study introduces a novel representation learning method based on causal inference to better understand human joint dynamics and complex behaviors. We propose a two-stage framework that combines the Peter-Clark (PC) algorithm and Kullback-Leibler (KL) divergence to identify and quantify causal relationships between joints. Our method effectively captures these interactions and produces interpretable, robust representations. Experiments on the EmoPain dataset show that our causal GCN outperforms traditional GCNs in accuracy, F1 score, and recall, especially in detecting protective behaviors. The model is also highly robust to changes in data scale, enhancing its reliability in practical applications. Our approach advances human motion analysis and paves the way for more adaptive intelligent healthcare solutions.
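
A minimal sketch of how the discovered causal adjacency could drive a causal GCN layer is shown below. PyTorch, the layer shape, and the row normalisation are assumptions made for illustration, not the authors' architecture; `causal_adj` stands in for the weighted matrix produced in the discovery sketch above.

```python
# Hypothetical sketch: a GCN layer whose message passing follows the learned
# causal adjacency rather than the fixed kinematic skeleton (PyTorch assumed).
import torch
import torch.nn as nn

class CausalGCNLayer(nn.Module):
    def __init__(self, in_features, out_features, causal_adj):
        super().__init__()
        adj = torch.as_tensor(causal_adj, dtype=torch.float32)
        adj = adj + torch.eye(adj.shape[0])          # keep each joint's own features
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-8)
        self.register_buffer("adj_norm", adj / deg)  # row-normalised propagation matrix
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        # x: (batch, n_joints, in_features) per-joint features for a frame or window
        return torch.relu(self.linear(self.adj_norm @ x))
```

Stacking a few such layers and pooling over joints and time would give a clip-level classifier that the EmoPain protective-behaviour labels could supervise.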
Problem

Research questions and friction points this paper is trying to address.

Enhancing interpretability in human movement recognition models
Quantifying causal relationships between human joints
Improving accuracy in detecting protective behaviors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal inference for human joint relationships
Two-stage framework combining the PC algorithm and KL divergence
Interpretable causal Graph Convolutional Network
Xingrui Gu
Master's Student, University of California, Berkeley
Learning Theory, Human Centered AI
Chuyi Jiang
Department of Electrical Engineering, Columbia University
Erte Wang
Department of Computer Science, University College London
Zekun Wu
Research Scientist, Holistic AI / PhD Student, University College London
Agentic AI, Responsible AI, Behavioural Robustness, Explainability, Interpretability
Qiang Cui
The Future Laboratory, Tsinghua University
Leimin Tian
Senior Research Scientist, CSIRO; Adjunct Senior Lecturer, Monash University
Human-Robot Interaction, Affective Computing, Human-Centered AI, Human-Robot Team, Social Robotics
Lianlong Wu
Department of Computer Science, Oxford University
Siyang Song
Department of Computer Science and Technology, Cambridge University
Chuang Yu
UCL Interaction Centre, University College London