Advancing Vision-based Human Action Recognition: Exploring Vision-Language CLIP Model for Generalisation in Domain-Independent Tasks

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional CNNs and RNNs exhibit weak generalization and poor robustness in medical action recognition (e.g., fall detection, surgical skill assessment), particularly under domain shift and partial occlusion. Method: The paper investigates the adaptability of CLIP to cross-domain visual action recognition and proposes three masking strategies (percentage- and shape-based masking, feature masking, and region isolation) together with class-specific noise injection and a customized contrastive loss, sharpening the model's focus on discriminative action features and mitigating bias. Contribution/Results: Systematic evaluation on UCF-101 reveals the inherent instability of vanilla CLIP in action classification. The proposed approach significantly improves classification accuracy and prediction confidence, showing superior robustness when critical visual cues are degraded and stronger cross-domain generalization, which is especially relevant for safety-critical medical applications.
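
As a reference point for the vanilla-CLIP evaluation described in the summary, a minimal zero-shot classification sketch is shown below. The checkpoint, prompt template, and the three example labels are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: zero-shot CLIP classification of a single video frame,
# in the style of the UCF-101 evaluation described above. The checkpoint,
# prompt template, and label set below are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

actions = ["playing basketball", "riding a horse", "playing the violin"]
prompts = [f"a photo of a person {a}" for a in actions]

frame = Image.open("frame.jpg")  # one frame sampled from a video clip
inputs = processor(text=prompts, images=frame, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_labels)
probs = logits.softmax(dim=-1)[0]
print({a: round(p, 3) for a, p in zip(actions, probs.tolist())})
```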

📝 Abstract
Human action recognition plays a critical role in healthcare and medicine, supporting applications such as patient behavior monitoring, fall detection, surgical robot supervision, and procedural skill assessment. While traditional models like CNNs and RNNs have achieved moderate success, they often struggle to generalize across diverse and complex actions. Recent advancements in vision-language models, especially the transformer-based CLIP model, offer promising capabilities for generalizing action recognition from video data. In this work, we evaluate CLIP on the UCF-101 dataset and systematically analyze its performance under three masking strategies: (1) percentage-based and shape-based black masking at 10%, 30%, and 50%, (2) feature-specific masking to suppress bias-inducing elements, and (3) isolation masking that retains only class-specific regions. Our results reveal that CLIP exhibits inconsistent behavior and frequent misclassifications, particularly when essential visual cues are obscured. To overcome these limitations, we propose incorporating class-specific noise, learned via a custom loss function, to reinforce attention to class-defining features. This enhancement improves classification accuracy and model confidence while reducing bias. We conclude with a discussion on the challenges of applying such models in clinical domains and outline directions for future work to improve generalizability across domain-independent healthcare scenarios.
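
To make masking strategy (1) concrete, here is a hedged sketch of percentage-based black masking: random square occluders are stamped onto a frame until roughly the target fraction of pixels is covered. The patch size and random placement policy are assumptions; the paper also varies occluder shape.

```python
# Hedged sketch of percentage-based black masking. Patch size and random
# placement are assumptions; the paper additionally varies occluder shape.
import random
import numpy as np

def percentage_black_mask(frame: np.ndarray, fraction: float,
                          patch: int = 16, seed: int = 0) -> np.ndarray:
    """Black out approximately `fraction` of an HxWxC frame."""
    rng = random.Random(seed)
    h, w = frame.shape[:2]
    out = frame.copy()
    covered, target = 0, int(fraction * h * w)
    while covered < target:
        y = rng.randrange(0, max(1, h - patch + 1))
        x = rng.randrange(0, max(1, w - patch + 1))
        out[y:y + patch, x:x + patch] = 0   # black square occluder
        covered += patch * patch            # overlaps make this approximate
    return out

# The paper's three occlusion settings:
# masked_10 = percentage_black_mask(frame, 0.10)
# masked_30 = percentage_black_mask(frame, 0.30)
# masked_50 = percentage_black_mask(frame, 0.50)
```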
Problem

Research questions and friction points this paper is trying to address.

Improving vision-based human action recognition generalization
Addressing CLIP model limitations in diverse action classification
Enhancing accuracy in domain-independent healthcare applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Vision-Language CLIP model
Implements three masking strategies
Incorporates class-specific noise enhancement (see the sketch below)
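
The sketch below illustrates one plausible form of the class-specific noise idea: a learnable per-class noise tensor added to input frames and trained, with CLIP frozen, so that noised images align with their class text embeddings. The InfoNCE-style loss, temperature, and learning rate are assumptions; the paper's custom contrastive loss is not reproduced verbatim here.

```python
# Hedged sketch of class-specific noise learned against frozen CLIP
# embeddings. Loss form, temperature, and hyperparameters are assumptions.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
for p in model.parameters():
    p.requires_grad_(False)               # CLIP stays frozen; only noise learns

classes = ["playing basketball", "riding a horse", "playing the violin"]
tok = processor(text=[f"a photo of a person {c}" for c in classes],
                return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    text_emb = F.normalize(model.get_text_features(**tok), dim=-1)

# One learnable noise map per class, matching CLIP's 224x224 input size.
noise = torch.zeros(len(classes), 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([noise], lr=1e-2)

def train_step(pixel_values: torch.Tensor, labels: torch.Tensor,
               tau: float = 0.07) -> float:
    """Add each sample's class noise, then pull its image embedding toward
    the matching class text embedding (cross-entropy over cosine logits)."""
    img = model.get_image_features(pixel_values=pixel_values + noise[labels])
    img = F.normalize(img, dim=-1)
    loss = F.cross_entropy(img @ text_emb.T / tau, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```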
Sanyam Jain
PhD, Aarhus University
Marsha Mariya Kappan
University of New South Wales, Sydney NSW 2033, Australia
Vijeta Sharma
Norwegian University of Science and Technology, 2815 Gjøvik, Norway