🤖 AI Summary
Existing video-based epilepsy detection methods suffer from limited generalization, susceptibility to background distractions, and reliance on subject-specific appearance cues. To address these limitations, this work proposes a joint-centric attention model that relies only on human skeletal dynamics. The approach first extracts video clips centered on body joints, tokenizes them with a Video Vision Transformer (ViViT), and applies a cross-joint attention mechanism to capture spatiotemporal coordination patterns among body parts. By focusing exclusively on joint motion, the method eliminates background bias and, under cross-subject evaluation, significantly outperforms state-of-the-art CNN-, graph-, and Transformer-based approaches when generalizing to unseen subjects.
📝 Abstract
Automated seizure detection from long-term clinical videos can substantially reduce manual review time and enable real-time monitoring. However, existing video-based methods often struggle to generalize to unseen subjects due to background bias and reliance on subject-specific appearance cues. We propose a joint-centric attention model that focuses exclusively on body dynamics to improve cross-subject generalization. For each video segment, body joints are detected and joint-centered clips are extracted, suppressing background context. These joint-centered clips are tokenized using a Video Vision Transformer (ViViT), and cross-joint attention is learned to model spatial and temporal interactions between body parts, capturing the coordinated movement patterns characteristic of seizure semiology. Extensive cross-subject experiments show that the proposed method consistently outperforms state-of-the-art CNN-, graph-, and Transformer-based approaches on unseen subjects.
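The core idea — one token per joint-centered clip, with attention computed across joints rather than across background pixels — can be sketched in a few lines. The sketch below is a minimal single-head illustration with numpy, not the paper's implementation: the joint count, embedding size, and weight shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_joint_attention(tokens, Wq, Wk, Wv):
    """Single-head attention across joint tokens.

    tokens: (J, d) array, one embedding per joint-centered clip
            (e.g. produced by a ViViT-style tokenizer).
    Returns a (J, d) array: each joint token updated by attending
    to every other joint, modeling cross-joint coordination.
    """
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scale = np.sqrt(K.shape[-1])
    attn = softmax(Q @ K.T / scale)  # (J, J) joint-to-joint weights
    return attn @ V

rng = np.random.default_rng(0)
J, d = 17, 32  # assumption: 17 COCO-style joints, toy embedding size
tokens = rng.standard_normal((J, d))          # stand-in for clip tokens
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
out = cross_joint_attention(tokens, Wq, Wk, Wv)
print(out.shape)  # → (17, 32)
```

Because the tokens carry only joint-local motion, the (J, J) attention map mixes information purely across body parts — the background never enters the computation, which is what the abstract credits for the improved cross-subject generalization.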