Dynamic Uncertainty-aware Multimodal Fusion for Outdoor Health Monitoring

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
In dynamic outdoor environments, health monitoring suffers from low robustness due to severe sensor noise, strong physiological signal fluctuations, and frequently missing multimodal data. To address this, we propose DUAL-Health, an uncertainty-aware multimodal fusion framework. Its core innovations include: (1) the first dynamic uncertainty quantification mechanism that jointly assesses input noise and signal volatility via temporal and instantaneous features; (2) an uncertainty-calibrated adaptive modality weighting strategy, coupled with cross-modal distribution alignment in a shared semantic space. Built upon a multimodal large language model (MLLM), DUAL-Health integrates an uncertainty estimation network, an adaptive fusion module, and spatiotemporal alignment mechanisms to enable noise-aware dynamic modeling and missing-data recovery. Experiments demonstrate that DUAL-Health significantly outperforms state-of-the-art methods under high-noise and missing-modality conditions, achieving substantial improvements in both detection accuracy and robustness.
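To make the weighting idea concrete, below is a minimal sketch (not the authors' code) of uncertainty-weighted fusion: each modality gets a small head that maps its current and temporal features to a non-negative uncertainty, and fusion weights are the softmax of the negated uncertainties, so noisier modalities contribute less. All class and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn


class UncertaintyWeightedFusion(nn.Module):
    """Fuse per-modality features with weights derived from estimated uncertainty."""

    def __init__(self, dim: int, num_modalities: int):
        super().__init__()
        # One small head per modality maps [current ; temporal] features
        # to a non-negative scalar uncertainty (Softplus keeps it >= 0).
        self.uncertainty_heads = nn.ModuleList([
            nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                          nn.Linear(dim, 1), nn.Softplus())
            for _ in range(num_modalities)
        ])

    def forward(self, current, temporal):
        # current[m], temporal[m]: (batch, dim) features of modality m.
        u = torch.cat([
            head(torch.cat([c, t], dim=-1))  # (batch, 1) uncertainty per modality
            for head, c, t in zip(self.uncertainty_heads, current, temporal)
        ], dim=-1)                            # (batch, M)
        w = torch.softmax(-u, dim=-1)         # noisier modality -> smaller weight
        feats = torch.stack(current, dim=1)   # (batch, M, dim)
        return (w.unsqueeze(-1) * feats).sum(dim=1), u


# Example: fuse three hypothetical modalities (e.g., IMU, PPG, audio embeddings).
fusion = UncertaintyWeightedFusion(dim=64, num_modalities=3)
cur = [torch.randn(8, 64) for _ in range(3)]
tmp = [torch.randn(8, 64) for _ in range(3)]
fused, uncertainty = fusion(cur, tmp)   # fused: (8, 64), uncertainty: (8, 3)
```

The softmax over negated uncertainties is one simple way to realize "low uncertainty, high weight"; the paper's exact weighting function may differ.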

📝 Abstract
Outdoor health monitoring is essential to detect early abnormal health status for safeguarding human health and safety. Conventional outdoor monitoring relies on static multimodal deep learning frameworks, which require extensive training data from scratch and fail to capture subtle health status changes. Multimodal large language models (MLLMs) emerge as a promising alternative, utilizing only small datasets to fine-tune pre-trained, information-rich models for powerful health status monitoring. Unfortunately, MLLM-based outdoor health monitoring also faces significant challenges: i) sensor data contains input noise stemming from sensor data acquisition and fluctuation noise caused by sudden changes in physiological signals due to dynamic outdoor environments, thus degrading training performance; ii) current transformer-based MLLMs struggle to achieve robust multimodal fusion, as they lack a design for fusing noisy modalities; iii) modalities with varying noise levels hinder accurate recovery of missing data from fluctuating distributions. To combat these challenges, we propose an uncertainty-aware multimodal fusion framework, named DUAL-Health, for outdoor health monitoring in dynamic and noisy environments. First, to assess the impact of noise, we accurately quantify modality uncertainty caused by input and fluctuation noise using current and temporal features. Second, to enable efficient multimodal fusion with low-quality modalities, we customize the fusion weight for each modality based on quantified and calibrated uncertainty. Third, to enhance data recovery from fluctuating noisy modalities, we align modality distributions within a common semantic space. Extensive experiments demonstrate that our DUAL-Health outperforms state-of-the-art baselines in detection accuracy and robustness.
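The abstract mentions "quantified and calibrated" uncertainty without specifying the calibrator. A minimal sketch, assuming temperature scaling (a common post-hoc calibration technique, not confirmed by the paper): a single learned temperature rescales raw uncertainties on held-out data before they enter the fusion weighting.

```python
import torch
import torch.nn as nn


class TemperatureCalibrator(nn.Module):
    """Rescale raw uncertainty scores with one learned temperature."""

    def __init__(self):
        super().__init__()
        self.log_t = nn.Parameter(torch.zeros(1))  # T = exp(log_t), init T = 1

    def forward(self, raw_u: torch.Tensor) -> torch.Tensor:
        # Dividing by T > 1 softens scores; T < 1 sharpens them. T is fit
        # on a held-out split while the rest of the model stays frozen.
        return raw_u / self.log_t.exp()
```

The calibrated scores would then replace the raw uncertainties in a weighting scheme like the one sketched above.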
Problem

Research questions and friction points this paper is trying to address.

Addressing noise in sensor data for health monitoring
Improving multimodal fusion in noisy outdoor environments
Enhancing data recovery from fluctuating noisy modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantify modality uncertainty using current and temporal features
Customize fusion weights based on calibrated uncertainty
Align modality distributions in a common semantic space (a hedged sketch follows this list)
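One common way to realize the third point is a distribution-matching loss between modality embeddings. A minimal sketch, assuming an RBF-kernel Maximum Mean Discrepancy penalty (the paper's actual alignment mechanism may differ); all names are hypothetical:

```python
import torch


def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased MMD^2 estimate between samples x, y of shape (n, dim)."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))  # RBF kernel values
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()


# Example: penalize divergence between two modalities' embeddings so that a
# decoder can impute one modality from the other in the shared space.
ppg = torch.randn(32, 64)   # hypothetical PPG embeddings
imu = torch.randn(32, 64)   # hypothetical IMU embeddings
align_loss = rbf_mmd(ppg, imu)
```

Minimizing such a loss pulls the per-modality feature distributions together, which is what makes cross-modal recovery of a missing modality plausible.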
Zihan Fang
Hong Kong JC STEM Lab of Smart City and Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong SAR, China

Zheng Lin
Department of Electrical and Electronic Engineering, The University of Hong Kong, Pok Fu Lam, Hong Kong, China

Senkang Hu
Hong Kong JC STEM Lab of Smart City and Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong SAR, China

Yihang Tao
City University of Hong Kong
Collaborative Perception · Autonomous Driving · World Model

Yiqin Deng
City University of Hong Kong
UAV-enabled Computing Power Networks · Resource Scheduling in Edge Computing · Edge AI

Xianhao Chen
Assistant Professor, The University of Hong Kong
Wireless networks · mobile edge computing · edge AI · distributed learning

Yuguang Fang
Hong Kong JC STEM Lab of Smart City and Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong SAR, China