🤖 AI Summary
This work addresses the clinical challenge of pressure ulcer prevention by tackling two key limitations in monocular depth-to-pressure mapping: physical implausibility and poor real-time performance. We propose the Latent-space Brownian Bridge Diffusion Model (LBBDM), introducing two novel components—the Informed Latent Space (ILS) and Weight-Optimized Loss (WOL)—which jointly integrate conditional Brownian bridge diffusion, physics-based regularization, and generative inverse problem solving. Crucially, mechanical priors are embedded directly into latent-space modeling. Our method significantly improves physical consistency of generated pressure maps (+32% on a mechanics consistency metric), achieves 4.8× faster inference than the baseline BBDM, and maintains >92% reconstruction fidelity. To our knowledge, this is the first approach enabling high-fidelity, physically interpretable, and real-time-deployable pressure distribution estimation—establishing a new paradigm for contactless, dynamic supine posture monitoring.
📝 Abstract
Monitoring contact pressure in hospital beds is essential for preventing pressure ulcers and enabling real-time patient assessment. Current methods can predict pressure maps but often lack physical plausibility, limiting clinical reliability. This work proposes a framework that enhances plausibility by combining an Informed Latent Space (ILS) and a Weight-Optimized Loss (WOL) with generative modeling to produce high-fidelity, physically consistent pressure estimates. The study applies a diffusion-based conditional Brownian Bridge Diffusion Model (BBDM) and proposes a training strategy for its latent counterpart, the Latent Brownian Bridge Diffusion Model (LBBDM), tailored to pressure synthesis in lying postures. Experimental results show that the proposed method improves physical plausibility and performance over baselines: BBDM with ILS delivers highly detailed maps at the cost of greater computation and longer inference time, whereas LBBDM provides faster inference with competitive performance. Overall, the approach supports non-invasive, vision-based, real-time patient monitoring in clinical environments.
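To make the Brownian-bridge idea concrete, here is a minimal NumPy sketch of the generic forward process that BBDM-style models build on: an intermediate state is an interpolation between the source `x0` (e.g. an encoded depth image) and the target `y` (e.g. an encoded pressure map), plus noise whose variance vanishes at both endpoints. The schedule symbols (`m_t = t/T`, `delta_max`) are generic placeholders for illustration, not the paper's exact parameterization.

```python
import numpy as np

def brownian_bridge_step(x0, y, t, T, delta_max=1.0, rng=None):
    """Sample x_t on a Brownian bridge between x0 (source) and y (target).

    Illustrative form of a bridge forward process:
        x_t = (1 - m_t) * x0 + m_t * y + sqrt(delta_t) * eps,
    with m_t = t / T and delta_t = 2 * delta_max * m_t * (1 - m_t),
    so the noise term vanishes at both endpoints (t = 0 and t = T):
    the trajectory is pinned to x0 at the start and to y at the end.
    """
    rng = np.random.default_rng() if rng is None else rng
    m_t = t / T
    delta_t = 2.0 * delta_max * m_t * (1.0 - m_t)
    eps = rng.standard_normal(np.shape(x0))
    return (1.0 - m_t) * np.asarray(x0) + m_t * np.asarray(y) + np.sqrt(delta_t) * eps
```

A reverse-time model then learns to denoise along this bridge, translating the conditioning image into the target domain; the "latent" variant runs the same bridge in an autoencoder's latent space, which is what makes the faster inference in LBBDM possible.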