Mitigating Modality Quantity and Quality Imbalance in Multimodal Online Federated Learning

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the prevalent modality quantity and quality imbalance (QQI) problem in multimodal online federated learning (MMO-FL) on IoT edge devices, this paper first systematically reveals its detrimental impact on model convergence and generalization. It then proposes QQR, a lightweight rebalancing mechanism embedded in the training process that dynamically models modality confidence via prototype learning and adaptively compensates for information loss from low-quality or missing modalities, without requiring additional annotations or data augmentation. The framework supports distributed, continual multimodal collaborative learning. Experiments on two real-world multimodal time-series datasets show that QQR maintains stable convergence under severe modality imbalance and improves average accuracy by 4.2–7.8% over state-of-the-art methods, significantly enhancing the robustness and efficiency of federated learning across heterogeneous edge devices.
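The summary describes prototype learning as the basis for modality confidence and rebalancing. A minimal sketch of that idea, under illustrative assumptions (per-class prototypes kept as running means of each modality's embeddings, confidence measured as similarity to the nearest prototype, and confidence-weighted fusion that skips missing modalities), could look like the following. All names and the exact weighting scheme here are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

class ModalityPrototypes:
    """Per-class prototypes for one modality, maintained as running means."""

    def __init__(self, num_classes, dim, momentum=0.9):
        self.protos = np.zeros((num_classes, dim))  # one prototype per class
        self.momentum = momentum

    def update(self, emb, label):
        # Exponential moving average keeps prototypes current as data streams in,
        # matching the online (no long-term storage) setting.
        self.protos[label] = (self.momentum * self.protos[label]
                              + (1 - self.momentum) * emb)

    def confidence(self, emb):
        # Cosine similarity to the closest class prototype, mapped to [0, 1].
        norms = np.linalg.norm(self.protos, axis=1) * np.linalg.norm(emb) + 1e-8
        sims = self.protos @ emb / norms
        return float((sims.max() + 1.0) / 2.0)

def fuse(embeddings, prototypes):
    """Confidence-weighted fusion; a missing modality (None) is skipped."""
    weights, feats = [], []
    for emb, proto in zip(embeddings, prototypes):
        if emb is None:  # modality absent on this device in this round
            continue
        weights.append(proto.confidence(emb))
        feats.append(emb)
    w = np.array(weights)
    w = w / (w.sum() + 1e-8)  # normalize so surviving modalities share weight 1
    return sum(wi * fi for wi, fi in zip(w, feats))
```

In this sketch, a low-quality modality drifts away from its class prototypes, so its confidence and fused weight shrink automatically; a missing modality simply contributes nothing and the remaining modalities are renormalized.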

📝 Abstract
The Internet of Things (IoT) ecosystem produces massive volumes of multimodal data from diverse sources, including sensors, cameras, and microphones. With advances in edge intelligence, IoT devices have evolved from simple data acquisition units into computationally capable nodes, enabling localized processing of heterogeneous multimodal data. This evolution necessitates distributed learning paradigms that can efficiently handle such data. Furthermore, the continuous nature of data generation and the limited storage capacity of edge devices demand an online learning framework. Multimodal Online Federated Learning (MMO-FL) has emerged as a promising approach to meet these requirements. However, MMO-FL faces new challenges due to the inherent instability of IoT devices, which often results in modality quantity and quality imbalance (QQI) during data collection. In this work, we systematically investigate the impact of QQI within the MMO-FL framework and present a comprehensive theoretical analysis quantifying how both types of imbalance degrade learning performance. To address these challenges, we propose the Modality Quantity and Quality Rebalanced (QQR) algorithm, a prototype-learning-based method designed to operate in parallel with the training process. Extensive experiments on two real-world multimodal datasets show that the proposed QQR algorithm consistently outperforms benchmark methods under modality imbalance while maintaining strong learning performance.
Problem

Research questions and friction points this paper is trying to address.

Addressing modality quantity imbalance in federated learning
Mitigating modality quality imbalance in distributed learning
Improving multimodal online federated learning performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online Federated Learning for multimodal IoT data
Modality Quantity and Quality Rebalanced algorithm
Parallel prototype learning during training
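The first innovation, online federated learning over streaming IoT data, can be sketched as a minimal round: each client takes one SGD step on its freshly streamed minibatch (nothing is stored locally), and the server averages the resulting models, FedAvg-style. This is an assumed generic protocol for illustration, not the paper's exact algorithm; a linear model with squared loss keeps it self-contained:

```python
import numpy as np

def client_step(w, x_batch, y_batch, lr=0.1):
    # One online SGD step on the current stream minibatch (squared loss).
    grad = x_batch.T @ (x_batch @ w - y_batch) / len(y_batch)
    return w - lr * grad

def server_aggregate(client_weights):
    # FedAvg-style aggregation: element-wise mean of the client models.
    return np.mean(client_weights, axis=0)

def federated_round(w_global, client_batches, lr=0.1):
    """One round: broadcast global model, local online step, aggregate."""
    updated = [client_step(w_global.copy(), x, y, lr) for x, y in client_batches]
    return server_aggregate(updated)
```

In the MMO-FL setting described above, `client_step` would be where per-modality rebalancing runs in parallel with training, so each device's update reflects only its reliable modalities.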