PainFormer: a Vision Foundation Model for Automatic Pain Assessment

📅 2025-05-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the clinical need for automatic, continuous, and reliable pain intensity assessment, this paper introduces PainFormer, the first vision foundation model designed specifically for pain evaluation. PainFormer is trained via multi-task learning on 14 tasks/datasets comprising 10.9 million samples and acts as an embedding extractor for multimodal inputs, including RGB, synthetic thermal, and estimated depth videos as well as physiological signals (ECG, EMG, GSR, fNIRS). Its embeddings are fused by the Embedding-Mixer, a transformer-based module that performs the final pain assessment, supporting general-purpose representation learning and plug-and-play adaptation across modalities. Evaluated on the BioVid and AI4Pain benchmarks and compared against 73 methods from the literature, PainFormer achieves state-of-the-art performance in both unimodal and multimodal settings, paving the way toward general-purpose models for automatic pain assessment.
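The summary describes a two-stage design: PainFormer extracts one embedding per input modality, and the Embedding-Mixer fuses those embeddings into the final pain estimate. The PyTorch sketch below illustrates that flow under stated assumptions; the paper does not publish this code, and the module names, dimensions, layer counts, and number of pain classes here are illustrative only.

```python
import torch
import torch.nn as nn

class EmbeddingMixer(nn.Module):
    """Transformer that fuses per-modality embeddings into pain-level logits."""
    def __init__(self, embed_dim=512, num_heads=8, num_layers=2, num_classes=5):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, embeddings):
        # embeddings: (batch, num_modalities, embed_dim), one vector per modality
        cls = self.cls_token.expand(embeddings.size(0), -1, -1)
        fused = self.encoder(torch.cat([cls, embeddings], dim=1))
        return self.head(fused[:, 0])  # classify from the fused CLS token

def assess_pain(painformer, mixer, modality_inputs):
    """Extract one embedding per modality with the (frozen) foundation model, then fuse."""
    with torch.no_grad():
        embeddings = torch.stack([painformer(x) for x in modality_inputs], dim=1)
    return mixer(embeddings)
```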

πŸ“ Abstract
Pain is a manifold condition that impacts a significant percentage of the population. Accurate and reliable pain evaluation for the people suffering is crucial to developing effective and advanced pain management protocols. Automatic pain assessment systems provide continuous monitoring and support decision-making processes, ultimately aiming to alleviate distress and prevent functionality decline. This study introduces PainFormer, a vision foundation model based on multi-task learning principles trained simultaneously on 14 tasks/datasets with a total of 10.9 million samples. Functioning as an embedding extractor for various input modalities, the foundation model provides feature representations to the Embedding-Mixer, a transformer-based module that performs the final pain assessment. Extensive experiments employing behavioral modalities (including RGB, synthetic thermal, and estimated depth videos) and physiological modalities such as ECG, EMG, GSR, and fNIRS revealed that PainFormer effectively extracts high-quality embeddings from diverse input modalities. The proposed framework is evaluated on two pain datasets, BioVid and AI4Pain, and directly compared to 73 different methodologies documented in the literature. Experiments conducted in unimodal and multimodal settings demonstrate state-of-the-art performances across modalities and pave the way toward general-purpose models for automatic pain assessment.
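The abstract states that PainFormer is trained simultaneously on 14 tasks/datasets. A common way to realize such simultaneous multi-task pretraining is a shared backbone with one head per task and alternating batches from each dataset; the sketch below shows that pattern as an assumption, not the authors' exact training recipe, and the function name, argument structure, and hyperparameters are hypothetical.

```python
import itertools
import torch
import torch.nn as nn

def pretrain_multitask(backbone, task_heads, task_loaders, steps=10_000, lr=1e-4):
    """Shared backbone + one classification head per task, trained on alternating batches.

    task_heads / task_loaders: dicts keyed by task name (hypothetical structure).
    """
    params = list(backbone.parameters())
    for head in task_heads.values():
        params += list(head.parameters())
    optimizer = torch.optim.AdamW(params, lr=lr)
    criterion = nn.CrossEntropyLoss()

    # Round-robin over tasks: every optimization step draws a batch from the
    # next task, so all datasets shape the shared representation simultaneously.
    batch_iters = {name: itertools.cycle(loader) for name, loader in task_loaders.items()}
    task_cycle = itertools.cycle(task_loaders)
    for _ in range(steps):
        task = next(task_cycle)
        inputs, labels = next(batch_iters[task])
        loss = criterion(task_heads[task](backbone(inputs)), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```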
Problem

Research questions and friction points this paper is trying to address.

How to assess pain automatically and continuously to support effective pain management
How to integrate behavioral and physiological modalities for comprehensive pain evaluation
How to build models that generalize across diverse pain datasets and modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-task learning with 14 tasks/datasets
Transformer-based Embedding-Mixer for assessment
Handles diverse behavioral and physiological inputs
🔎 Similar Papers
No similar papers found.
Stefanos Gkikas
Hellenic Mediterranean University, Department of Electrical and Computer Engineering, Heraklion, Crete 714 10, Greece, and Institute of Computer Science, Foundation for Research & Technology-Hellas, Heraklion, Crete GR-70013, Greece
Raul Fernandez Rojas
University of Canberra, Australia
Signal Processing, fNIRS, Machine Learning, EEG, Multimodal Sensing
Manolis Tsiknakis
Dept. of Electrical & Computer Engineering, Hellenic Mediterranean University, Greece
Biomedical Informatics, eHealth, mHealth, Affective Computing, Biomedical Signal Processing and Analysis