OmniFall: A Unified Staged-to-Wild Benchmark for Human Fall Detection

📅 2025-05-26
🤖 AI Summary
Existing fall detection research is hindered by small-scale, lab-controlled datasets with substantial domain bias, leaving real-world generalization largely uncharacterized. To address this, we introduce the first unified staged-to-wild benchmark for fall detection, integrating eight public datasets (56 hours of video) under a fine-grained ten-class annotation schema with standardized evaluation protocols, and curating OOPS-Fall, a subset of genuine real-world accident footage. We propose a cross-dataset-compatible video segmentation annotation scheme and an out-of-distribution evaluation framework for measuring generalization. Our empirical analysis reveals, for the first time, that state-of-the-art pretrained models, including I3D and VideoMAE, suffer over 40% accuracy degradation on real-world data. All data, annotations, and code are openly released, providing a reproducible, comparable, and robust evaluation foundation for future fall detection research.

📝 Abstract
Current video-based fall detection research mostly relies on small, staged datasets with significant domain biases concerning background, lighting, and camera setup, resulting in unknown real-world performance. We introduce OmniFall, unifying eight public fall detection datasets (roughly 14 h of recordings, roughly 42 h of multiview data, 101 subjects, 29 camera views) under a consistent ten-class taxonomy with standardized evaluation protocols. Our benchmark provides complete video segmentation labels and enables fair cross-dataset comparison, previously impossible with incompatible annotation schemes. For real-world evaluation we curate OOPS-Fall from genuine accident videos and establish a staged-to-wild protocol measuring generalization from controlled to uncontrolled environments. Experiments with frozen pre-trained backbones such as I3D or VideoMAE reveal significant performance gaps between in-distribution and in-the-wild scenarios, highlighting critical challenges in developing robust fall detection systems. Dataset: https://huggingface.co/datasets/simplexsigil2/omnifall , Code: https://github.com/simplexsigil/omnifall-experiments
Problem

Research questions and friction points this paper is trying to address.

Addresses domain biases in staged fall detection datasets
Unifies diverse datasets for fair cross-dataset comparison
Measures generalization from controlled to real-world scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies eight datasets under consistent taxonomy
Provides complete video segmentation labels
Establishes staged-to-wild evaluation protocol
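The staged-to-wild gap these points describe can be illustrated with a toy sketch: a nearest-centroid linear probe over synthetic features stands in for a classifier on top of a frozen backbone (e.g. I3D or VideoMAE), and a mean shift in the test features stands in for the staged-to-wild domain gap. Everything here (feature dimension, shift size, the probe itself) is an illustrative assumption, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # illustrative feature dimensionality, not the backbone's real size

def make_split(n, shift=0.0):
    """Synthetic stand-in for per-clip features from a frozen backbone.

    Two classes ("no fall" vs "fall") separated along every axis;
    `shift` models the staged-to-wild distribution shift.
    """
    x0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, DIM))
    x1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, DIM))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

x_train, y_train = make_split(200)           # staged (controlled) training data
x_staged, y_staged = make_split(200)         # held-out staged test split
x_wild, y_wild = make_split(200, shift=1.5)  # shifted "in the wild" test split

# Nearest-centroid probe fitted on the staged training features only.
centroids = np.stack([x_train[y_train == c].mean(axis=0) for c in (0, 1)])

def accuracy(x, y):
    # Squared distance of each sample to each class centroid, then argmin.
    dists = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return float((dists.argmin(axis=1) == y).mean())

acc_staged = accuracy(x_staged, y_staged)
acc_wild = accuracy(x_wild, y_wild)
print(f"staged accuracy: {acc_staged:.2f}, wild accuracy: {acc_wild:.2f}")
```

With this seed, staged test accuracy stays high while accuracy on the shifted split falls toward chance, mirroring in miniature the in-distribution vs. in-the-wild gap the benchmark is designed to expose.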
👥 Authors

David Schneider, Karlsruhe Institute of Technology
Zdravko Marinov, Karlsruhe Institute of Technology (Ph.D. student; interests: interactive segmentation, medical image analysis, action recognition, domain adaptation)
Rafael Baur, Karlsruhe Institute of Technology
Zeyun Zhong, Karlsruhe Institute of Technology
Rodi Düger, Karlsruhe Institute of Technology
Rainer Stiefelhagen, Karlsruhe Institute of Technology, Karlsruhe, Germany (interests: computer vision, multimodal interaction, accessibility)