Collaborative Multi-LoRA Experts with Achievement-based Multi-task Loss for Unified Multimodal Information Extraction

📅 2025-05-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address gradient conflicts, sample imbalance, and excessive task coupling in multimodal information extraction (MIE), this paper proposes a collaborative multi-LoRA expert architecture (C-LoRAE). It pairs a universal cross-task expert with task-specific experts, enabling knowledge sharing across tasks while keeping them decoupled. A dynamic multi-task loss, weighted by each task's training achievement, is introduced to keep training progress balanced across tasks with uneven sample counts. The method combines low-rank adaptation (LoRA), vision adapters, and instruction tuning. Evaluated on seven benchmark datasets across three MIE task categories, the approach outperforms full-parameter fine-tuning and standard LoRA while using a comparable number of trainable parameters, demonstrating both parameter efficiency and strong generalization.
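To make the expert layout concrete, here is a minimal PyTorch sketch of a linear layer with a frozen pretrained weight, one universal LoRA expert, and one LoRA expert per task. All names here (`CollaborativeLoRALinear`, `LoRABranch`, `rank`, `alpha`) are illustrative assumptions, not the paper's implementation; its routing and initialization details may differ.

```python
import torch
import torch.nn as nn


class LoRABranch(nn.Module):
    """One low-rank adapter: x -> up(down(x))."""
    def __init__(self, in_dim, out_dim, rank):
        super().__init__()
        self.down = nn.Linear(in_dim, rank, bias=False)
        self.up = nn.Linear(rank, out_dim, bias=False)
        nn.init.normal_(self.down.weight, std=0.01)
        nn.init.zeros_(self.up.weight)  # zero init: training starts from the base model


    def forward(self, x):
        return self.up(self.down(x))


class CollaborativeLoRALinear(nn.Module):
    """Frozen pretrained linear plus a shared universal expert and per-task experts."""
    def __init__(self, in_dim, out_dim, num_tasks, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen
        self.scale = alpha / rank
        self.universal = LoRABranch(in_dim, out_dim, rank)  # shared cross-task knowledge
        self.experts = nn.ModuleList(
            LoRABranch(in_dim, out_dim, rank) for _ in range(num_tasks)
        )

    def forward(self, x, task_id):
        # Only the universal expert and this task's own expert contribute, so
        # gradients from other tasks never reach this expert: tasks stay decoupled.
        delta = self.universal(x) + self.experts[task_id](x)
        return self.base(x) + self.scale * delta
```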

📝 Abstract
Multimodal Information Extraction (MIE) has gained attention for extracting structured information from multimedia sources. Traditional methods tackle MIE tasks separately, missing opportunities to share knowledge across tasks. Recent approaches unify these tasks into a generation problem using instruction-based T5 models with visual adaptors, optimized through full-parameter fine-tuning. However, this method is computationally intensive, and multi-task fine-tuning often faces gradient conflicts, limiting performance. To address these challenges, we propose collaborative multi-LoRA experts with achievement-based multi-task loss (C-LoRAE) for MIE tasks. C-LoRAE extends the low-rank adaptation (LoRA) method by incorporating a universal expert to learn shared multimodal knowledge from cross-MIE tasks and task-specific experts to learn specialized instructional task features. This configuration enhances the model's generalization ability across multiple tasks while maintaining the independence of various instruction tasks and mitigating gradient conflicts. Additionally, we propose an achievement-based multi-task loss to balance training progress across tasks, addressing the imbalance caused by varying numbers of training samples in MIE tasks. Experimental results on seven benchmark datasets across three key MIE tasks demonstrate that C-LoRAE achieves superior overall performance compared to traditional fine-tuning methods and LoRA methods while utilizing a comparable number of training parameters to LoRA.
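The achievement-based loss is described only at a high level above; the sketch below shows one plausible reading, in which each task's "achievement" is approximated by how far its loss has fallen from its first recorded value, and lagging tasks receive larger weights through a softmax. This resembles dynamic weight averaging; the normalization, temperature, and exact achievement metric are assumptions, and the paper's formulation may differ.

```python
import torch


class AchievementWeightedLoss:
    """Reweights per-task losses so slower-progressing tasks dominate the gradient."""
    def __init__(self, num_tasks, temperature=1.0):
        self.init_loss = [None] * num_tasks  # loss at the first step, per task
        self.temperature = temperature

    def __call__(self, task_losses):
        # task_losses: list of scalar tensors, one per task, for the current step
        ratios = []
        for t, loss in enumerate(task_losses):
            if self.init_loss[t] is None:
                self.init_loss[t] = loss.detach()
            # High ratio = loss has barely dropped = low achievement so far.
            ratios.append(loss.detach() / (self.init_loss[t] + 1e-8))
        ratios = torch.stack(ratios)
        # Softmax turns relative progress into weights; rescale so they sum to num_tasks.
        weights = torch.softmax(ratios / self.temperature, dim=0) * len(task_losses)
        return sum(w * l for w, l in zip(weights, task_losses))
```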
Problem

Research questions and friction points this paper is trying to address.

Separate models per MIE task miss opportunities to share knowledge across tasks
Full-parameter multi-task fine-tuning is computationally intensive and suffers gradient conflicts
Varying numbers of training samples across tasks unbalance training progress
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative multi-LoRA experts for unified MIE tasks
Achievement-based multi-task loss balances training progress
Universal and task-specific experts enhance generalization (a combined training sketch follows this list)
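Putting the two pieces together, one training step might look like the following. Everything here is a stand-in: the dimensions, the three-task setup, and the dummy regression objective are placeholders for the paper's instruction-tuned T5 pipeline, and the code reuses the hypothetical `CollaborativeLoRALinear` and `AchievementWeightedLoss` from the sketches above.

```python
import torch
import torch.nn.functional as F

# One collaborative layer and the achievement-weighted criterion (from the sketches above).
layer = CollaborativeLoRALinear(in_dim=768, out_dim=768, num_tasks=3)
criterion = AchievementWeightedLoss(num_tasks=3)

# Only the LoRA experts are trainable; the frozen base weights are skipped.
trainable = [p for p in layer.parameters() if p.requires_grad]
opt = torch.optim.AdamW(trainable, lr=1e-4)

x = torch.randn(4, 768)       # stand-in for fused text+image features
target = torch.randn(4, 768)  # stand-in for the supervision signal

# One loss per MIE task (e.g. entity, relation, and event extraction).
task_losses = [F.mse_loss(layer(x, task_id=t), target) for t in range(3)]
loss = criterion(task_losses)  # lagging tasks are weighted up
opt.zero_grad()
loss.backward()
opt.step()
```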
Li Yuan
Yi Cai
Key Laboratory of Big Data and Intelligent Robot (SCUT), MOE of China; School of Software Engineering, South China University of Technology, Guangzhou, China
Xudong Shen
School of Software Engineering, South China University of Technology, Guangzhou, China
Qing Li
Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China
Qingbao Huang
Guangxi University
Zikun Deng
School of Software Engineering, South China University of Technology, Guangzhou, China; Key Laboratory of Big Data and Intelligent Robot (SCUT), MOE of China
Tao Wang
Department of Biostatistics & Health Informatics, King’s College London, London, United Kingdom