REMOTE: A Unified Multimodal Relation Extraction Framework with Multilevel Optimal Transport and Mixture-of-Experts

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods are limited to extracting single-modal relation triples of pre-defined types, failing to model dynamic cross-modal interactions and suffering from redundancy in compositional pipelines. To address these limitations, we propose REMOTE, a unified multimodal relation extraction framework. REMOTE employs a Mixture-of-Experts (MoE) mechanism to dynamically select optimal intra- and inter-modal interaction features, and integrates a Multilevel Optimal Transport (MOT) fusion module to preserve fine-grained visual–textual alignment at multiple semantic levels. This enables the first joint, fine-grained extraction of textual entities, visual objects, and cross-modal relations. Extensive experiments demonstrate state-of-the-art performance across multiple mainstream multimodal relation extraction (MRE) benchmarks. Furthermore, we introduce UMRE—the first unified multimodal relation extraction benchmark dataset—to rigorously evaluate generalization and practicality. Code, models, and the UMRE dataset will be publicly released.
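The optimal-transport fusion named above can be illustrated with a minimal entropic-OT (Sinkhorn) sketch that aligns text-token features to visual-region features. This is an assumption-laden toy, not the paper's actual formulation: the uniform marginals, cosine cost, and regularization value are all illustrative choices.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, n_iters=100):
    """Entropic-regularized optimal transport between two uniform
    distributions, solved with Sinkhorn iterations."""
    n, m = cost.shape
    a = np.full(n, 1.0 / n)   # uniform mass over text tokens
    b = np.full(m, 1.0 / m)   # uniform mass over visual regions
    K = np.exp(-cost / reg)   # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan, rows ~ a, cols ~ b

# Toy example: 3 text-token features vs. 4 visual-region features.
rng = np.random.default_rng(0)
text_feats = rng.normal(size=(3, 8))
img_feats = rng.normal(size=(4, 8))
# Cosine-distance cost matrix between the two feature sets.
cost = 1.0 - (text_feats @ img_feats.T) / (
    np.linalg.norm(text_feats, axis=1, keepdims=True)
    * np.linalg.norm(img_feats, axis=1).clip(1e-9))
plan = sinkhorn(cost)
```

The resulting `plan` is a soft alignment whose rows sum to the text marginal and columns to the visual marginal; a fusion module could use it to transport low-level visual features onto token positions.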

📝 Abstract
Multimodal relation extraction (MRE) is a crucial task in the fields of Knowledge Graph and Multimedia, playing a pivotal role in multimodal knowledge graph construction. However, existing methods are typically limited to extracting a single type of relational triplet, which restricts their ability to extract triplets beyond the specified types. Directly combining these methods fails to capture dynamic cross-modal interactions and introduces significant computational redundancy. Therefore, we propose a novel unified multimodal Relation Extraction framework with Multilevel Optimal Transport and mixture-of-Experts, termed REMOTE, which can simultaneously extract intra-modal and inter-modal relations between textual entities and visual objects. To dynamically select optimal interaction features for different types of relational triplets, we introduce a mixture-of-experts mechanism, ensuring the most relevant modality information is utilized. Additionally, considering that the inherent property of multilayer sequential encoding in existing encoders often leads to the loss of low-level information, we adopt a multilevel optimal transport fusion module to preserve low-level features while maintaining multilayer encoding, yielding more expressive representations. Correspondingly, we also create a Unified Multimodal Relation Extraction (UMRE) dataset to evaluate the effectiveness of our framework, encompassing diverse cases where the head and tail entities can originate from either text or image. Extensive experiments show that REMOTE effectively extracts various types of relational triplets and achieves state-of-the-art performance on almost all metrics across two other public MRE datasets. We release our resources at https://github.com/Nikol-coder/REMOTE.
Problem

Research questions and friction points this paper is trying to address.

Unified extraction of intra-modal and inter-modal relations
Dynamic selection of cross-modal interaction features
Preservation of low-level information in multilayer encoding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilevel Optimal Transport fusion module
Mixture-of-experts mechanism for modality selection
Unified framework for intra-inter modal relations
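The mixture-of-experts mechanism listed above can be sketched as soft gating over per-interaction features, where each "expert" stands for one intra- or inter-modal interaction (e.g. text–text, text–image, image–image). The expert roles, gating input, and shapes here are hypothetical illustrations, not the paper's architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_select(interaction_feats, gate_W):
    """Gate over num_experts interaction features of dimension d
    and return their weighted combination."""
    # Gating input: a pooled summary of all interaction features.
    gate_in = interaction_feats.mean(axis=0)          # (d,)
    weights = softmax(gate_W @ gate_in)               # (num_experts,)
    fused = weights @ interaction_feats               # (d,)
    return fused, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 16))    # 3 hypothetical interaction experts
gate_W = rng.normal(size=(3, 16))   # hypothetical gating parameters
fused, w = moe_select(feats, gate_W)
```

The gate produces a distribution over experts, so different relational triplet types can draw on different interaction features without a hard architectural switch.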
Xinkui Lin
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Yongxiu Xu
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Minghao Tang
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Shilong Zhang
University of Hong Kong
Hongbo Xu
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Hao Xu
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China
Yubin Wang
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China